Jan 13 20:54:02.120313 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 20:54:02.120364 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:54:02.120384 kernel: BIOS-provided physical RAM map:
Jan 13 20:54:02.120399 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 13 20:54:02.120413 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 13 20:54:02.120426 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 13 20:54:02.120444 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 13 20:54:02.120465 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 13 20:54:02.120479 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd324fff] usable
Jan 13 20:54:02.120491 kernel: BIOS-e820: [mem 0x00000000bd325000-0x00000000bd32dfff] ACPI data
Jan 13 20:54:02.120504 kernel: BIOS-e820: [mem 0x00000000bd32e000-0x00000000bf8ecfff] usable
Jan 13 20:54:02.120518 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Jan 13 20:54:02.120530 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 13 20:54:02.120543 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 13 20:54:02.120564 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 13 20:54:02.120582 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 13 20:54:02.120597 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 13 20:54:02.120612 kernel: NX (Execute Disable) protection: active
Jan 13 20:54:02.120627 kernel: APIC: Static calls initialized
Jan 13 20:54:02.120643 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:54:02.120661 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd325018
Jan 13 20:54:02.120679 kernel: random: crng init done
Jan 13 20:54:02.120696 kernel: secureboot: Secure boot disabled
Jan 13 20:54:02.120711 kernel: SMBIOS 2.4 present.
Jan 13 20:54:02.120733 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 13 20:54:02.120749 kernel: Hypervisor detected: KVM
Jan 13 20:54:02.120766 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:54:02.120783 kernel: kvm-clock: using sched offset of 13023606109 cycles
Jan 13 20:54:02.120801 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:54:02.120819 kernel: tsc: Detected 2299.998 MHz processor
Jan 13 20:54:02.120836 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:54:02.120854 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:54:02.120871 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 13 20:54:02.120891 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 13 20:54:02.120906 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:54:02.120923 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 13 20:54:02.120937 kernel: Using GB pages for direct mapping
Jan 13 20:54:02.120953 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:54:02.120969 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 13 20:54:02.120985 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 13 20:54:02.121007 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 13 20:54:02.121030 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 13 20:54:02.121048 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 13 20:54:02.121079 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 13 20:54:02.121095 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 13 20:54:02.121110 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 13 20:54:02.121126 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 13 20:54:02.121147 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 13 20:54:02.121163 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 13 20:54:02.121179 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 13 20:54:02.121195 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 13 20:54:02.121211 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 13 20:54:02.121227 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 13 20:54:02.121244 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 13 20:54:02.121267 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 13 20:54:02.121282 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 13 20:54:02.121304 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 13 20:54:02.121321 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 13 20:54:02.121337 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 20:54:02.121353 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 20:54:02.121370 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 20:54:02.121387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 13 20:54:02.121403 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 13 20:54:02.121419 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 13 20:54:02.121435 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 13 20:54:02.121456 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Jan 13 20:54:02.121474 kernel: Zone ranges:
Jan 13 20:54:02.121491 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:54:02.121506 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 20:54:02.121524 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 13 20:54:02.121539 kernel: Movable zone start for each node
Jan 13 20:54:02.121555 kernel: Early memory node ranges
Jan 13 20:54:02.121571 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 13 20:54:02.121588 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 13 20:54:02.121604 kernel: node 0: [mem 0x0000000000100000-0x00000000bd324fff]
Jan 13 20:54:02.121626 kernel: node 0: [mem 0x00000000bd32e000-0x00000000bf8ecfff]
Jan 13 20:54:02.121642 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 13 20:54:02.121658 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 13 20:54:02.121676 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 13 20:54:02.121692 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:54:02.121710 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 13 20:54:02.121726 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 13 20:54:02.121743 kernel: On node 0, zone DMA32: 9 pages in unavailable ranges
Jan 13 20:54:02.121760 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 13 20:54:02.121781 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 13 20:54:02.121797 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 20:54:02.121813 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:54:02.121830 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:54:02.121846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:54:02.121864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:54:02.121881 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:54:02.121897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:54:02.121915 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:54:02.121938 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:54:02.121955 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 13 20:54:02.121970 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:54:02.121984 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:54:02.121999 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:54:02.122016 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:54:02.122032 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:54:02.122046 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:54:02.122092 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:54:02.122117 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:54:02.122137 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:54:02.122155 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:54:02.122172 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 13 20:54:02.122191 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:54:02.122209 kernel: Fallback order for Node 0: 0
Jan 13 20:54:02.122227 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932271
Jan 13 20:54:02.122247 kernel: Policy zone: Normal
Jan 13 20:54:02.122278 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:54:02.122297 kernel: software IO TLB: area num 2.
Jan 13 20:54:02.122316 kernel: Memory: 7513360K/7860548K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 346932K reserved, 0K cma-reserved)
Jan 13 20:54:02.122333 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:54:02.122352 kernel: Kernel/User page tables isolation: enabled
Jan 13 20:54:02.122371 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 20:54:02.122388 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:54:02.122407 kernel: Dynamic Preempt: voluntary
Jan 13 20:54:02.122444 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:54:02.122465 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:54:02.122484 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:54:02.122504 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:54:02.122527 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:54:02.122546 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:54:02.122565 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:54:02.122585 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:54:02.122605 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:54:02.122629 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:54:02.122648 kernel: Console: colour dummy device 80x25
Jan 13 20:54:02.122668 kernel: printk: console [ttyS0] enabled
Jan 13 20:54:02.122687 kernel: ACPI: Core revision 20230628
Jan 13 20:54:02.122706 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:54:02.122726 kernel: x2apic enabled
Jan 13 20:54:02.122745 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:54:02.122765 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 13 20:54:02.122784 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 13 20:54:02.122807 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 13 20:54:02.122826 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 13 20:54:02.122843 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 13 20:54:02.122863 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:54:02.122882 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 20:54:02.122901 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 20:54:02.122920 kernel: Spectre V2 : Mitigation: IBRS
Jan 13 20:54:02.122939 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:54:02.122962 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:54:02.122981 kernel: RETBleed: Mitigation: IBRS
Jan 13 20:54:02.123000 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:54:02.123020 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 13 20:54:02.123036 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:54:02.123054 kernel: MDS: Mitigation: Clear CPU buffers
Jan 13 20:54:02.123097 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:54:02.123116 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:54:02.123136 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:54:02.123161 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:54:02.123180 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:54:02.123200 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 13 20:54:02.123217 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:54:02.123235 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:54:02.123262 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:54:02.123281 kernel: landlock: Up and running.
Jan 13 20:54:02.123300 kernel: SELinux: Initializing.
Jan 13 20:54:02.123320 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 20:54:02.123344 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 20:54:02.123363 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 13 20:54:02.123382 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:54:02.123402 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:54:02.123422 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:54:02.123442 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 13 20:54:02.123461 kernel: signal: max sigframe size: 1776
Jan 13 20:54:02.123480 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:54:02.123500 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:54:02.123523 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 20:54:02.123541 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:54:02.123561 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:54:02.123581 kernel: .... node #0, CPUs: #1
Jan 13 20:54:02.123601 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 20:54:02.123622 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 20:54:02.123641 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:54:02.123661 kernel: smpboot: Max logical packages: 1
Jan 13 20:54:02.123684 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 13 20:54:02.123704 kernel: devtmpfs: initialized
Jan 13 20:54:02.123724 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:54:02.123742 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 13 20:54:02.123762 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:54:02.123782 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:54:02.123802 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:54:02.123821 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:54:02.123841 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:54:02.123864 kernel: audit: type=2000 audit(1736801640.763:1): state=initialized audit_enabled=0 res=1
Jan 13 20:54:02.123884 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:54:02.123904 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:54:02.123922 kernel: cpuidle: using governor menu
Jan 13 20:54:02.123942 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:54:02.123962 kernel: dca service started, version 1.12.1
Jan 13 20:54:02.123981 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:54:02.124000 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:54:02.124015 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:54:02.124036 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:54:02.124069 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:54:02.124096 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:54:02.124111 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:54:02.124127 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:54:02.124143 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:54:02.124157 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:54:02.124173 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 20:54:02.124189 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:54:02.124213 kernel: ACPI: Interpreter enabled
Jan 13 20:54:02.124232 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:54:02.124251 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:54:02.124277 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:54:02.124299 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 13 20:54:02.124316 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 20:54:02.124335 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:54:02.124612 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:54:02.124814 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:54:02.124993 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:54:02.125016 kernel: PCI host bridge to bus 0000:00
Jan 13 20:54:02.125247 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:54:02.125452 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:54:02.125641 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:54:02.125810 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 13 20:54:02.125987 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:54:02.126243 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:54:02.126720 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 13 20:54:02.126990 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 20:54:02.127227 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 20:54:02.127430 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 13 20:54:02.127644 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 13 20:54:02.127834 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 13 20:54:02.128030 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:54:02.128254 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 13 20:54:02.128446 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 13 20:54:02.128656 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:54:02.128848 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 13 20:54:02.129046 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 13 20:54:02.129100 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:54:02.129121 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:54:02.129139 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:54:02.129158 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:54:02.129177 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:54:02.129195 kernel: iommu: Default domain type: Translated
Jan 13 20:54:02.129214 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:54:02.129232 kernel: efivars: Registered efivars operations
Jan 13 20:54:02.129257 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:54:02.129277 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:54:02.129296 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 13 20:54:02.129315 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 13 20:54:02.129333 kernel: e820: reserve RAM buffer [mem 0xbd325000-0xbfffffff]
Jan 13 20:54:02.129352 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 13 20:54:02.129370 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 13 20:54:02.129388 kernel: vgaarb: loaded
Jan 13 20:54:02.129406 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:54:02.129429 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:54:02.129449 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:54:02.129469 kernel: pnp: PnP ACPI init
Jan 13 20:54:02.129496 kernel: pnp: PnP ACPI: found 7 devices
Jan 13 20:54:02.129516 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:54:02.129536 kernel: NET: Registered PF_INET protocol family
Jan 13 20:54:02.129554 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 20:54:02.129574 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 13 20:54:02.129599 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:54:02.129619 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:54:02.129639 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 13 20:54:02.129659 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 13 20:54:02.129676 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 13 20:54:02.129703 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 13 20:54:02.129728 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:54:02.129748 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:54:02.129944 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:54:02.130191 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:54:02.130360 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:54:02.130530 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 13 20:54:02.130718 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:54:02.130742 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:54:02.130761 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 20:54:02.130780 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 13 20:54:02.130805 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 20:54:02.130824 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 13 20:54:02.130843 kernel: clocksource: Switched to clocksource tsc
Jan 13 20:54:02.130862 kernel: Initialise system trusted keyrings
Jan 13 20:54:02.130880 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 13 20:54:02.130898 kernel: Key type asymmetric registered
Jan 13 20:54:02.130916 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:54:02.130934 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:54:02.130953 kernel: io scheduler mq-deadline registered
Jan 13 20:54:02.130975 kernel: io scheduler kyber registered
Jan 13 20:54:02.130994 kernel: io scheduler bfq registered
Jan 13 20:54:02.131012 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:54:02.131033 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 20:54:02.131269 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 13 20:54:02.131294 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 13 20:54:02.131469 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 13 20:54:02.131500 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 20:54:02.131677 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 13 20:54:02.131705 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:54:02.131724 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:54:02.131742 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 13 20:54:02.131760 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 13 20:54:02.131779 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 13 20:54:02.131962 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 13 20:54:02.131988 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:54:02.132006 kernel: i8042: Warning: Keylock active
Jan 13 20:54:02.132029 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:54:02.132048 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:54:02.132244 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 20:54:02.132412 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 20:54:02.132588 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T20:54:01 UTC (1736801641)
Jan 13 20:54:02.132753 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 20:54:02.132777 kernel: intel_pstate: CPU model not supported
Jan 13 20:54:02.132796 kernel: pstore: Using crash dump compression: deflate
Jan 13 20:54:02.132820 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 13 20:54:02.132839 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:54:02.132857 kernel: Segment Routing with IPv6
Jan 13 20:54:02.132876 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:54:02.132895 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:54:02.132913 kernel: Key type dns_resolver registered
Jan 13 20:54:02.132931 kernel: IPI shorthand broadcast: enabled
Jan 13 20:54:02.132949 kernel: sched_clock: Marking stable (898004536, 191685512)->(1117403573, -27713525)
Jan 13 20:54:02.132967 kernel: registered taskstats version 1
Jan 13 20:54:02.132989 kernel: Loading compiled-in X.509 certificates
Jan 13 20:54:02.133008 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 20:54:02.133026 kernel: Key type .fscrypt registered
Jan 13 20:54:02.133044 kernel: Key type fscrypt-provisioning registered
Jan 13 20:54:02.133086 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:54:02.133104 kernel: ima: No architecture policies found
Jan 13 20:54:02.133123 kernel: clk: Disabling unused clocks
Jan 13 20:54:02.133141 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 13 20:54:02.133160 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 20:54:02.133183 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 13 20:54:02.133201 kernel: Run /init as init process
Jan 13 20:54:02.133219 kernel: with arguments:
Jan 13 20:54:02.133236 kernel: /init
Jan 13 20:54:02.133255 kernel: with environment:
Jan 13 20:54:02.133273 kernel: HOME=/
Jan 13 20:54:02.133291 kernel: TERM=linux
Jan 13 20:54:02.133309 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:54:02.133327 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:54:02.133355 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:54:02.133377 systemd[1]: Detected virtualization google.
Jan 13 20:54:02.133397 systemd[1]: Detected architecture x86-64.
Jan 13 20:54:02.133415 systemd[1]: Running in initrd.
Jan 13 20:54:02.133434 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:54:02.133453 systemd[1]: Hostname set to .
Jan 13 20:54:02.133472 systemd[1]: Initializing machine ID from random generator.
Jan 13 20:54:02.133504 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:54:02.133524 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:54:02.133543 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:54:02.133563 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:54:02.133582 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:54:02.133601 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:54:02.133621 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:54:02.133647 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:54:02.133684 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:54:02.133708 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:54:02.133729 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:54:02.133749 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:54:02.133776 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:54:02.133797 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:54:02.133817 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:54:02.133837 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:54:02.133857 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:54:02.133877 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:54:02.133898 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:54:02.133918 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:54:02.133938 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:54:02.133962 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:54:02.133982 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:54:02.134002 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:54:02.134023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:54:02.134043 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:54:02.134106 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:54:02.134127 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:54:02.134147 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:54:02.134167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:54:02.134232 systemd-journald[184]: Collecting audit messages is disabled.
Jan 13 20:54:02.134276 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:54:02.134297 systemd-journald[184]: Journal started
Jan 13 20:54:02.134340 systemd-journald[184]: Runtime Journal (/run/log/journal/2f624896666b4a128c85dac7c6fe77b9) is 8.0M, max 148.7M, 140.7M free.
Jan 13 20:54:02.137266 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:54:02.140880 systemd-modules-load[185]: Inserted module 'overlay'
Jan 13 20:54:02.142773 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:54:02.148966 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:54:02.173353 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:54:02.183840 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:54:02.191634 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:54:02.191679 kernel: Bridge firewalling registered
Jan 13 20:54:02.190917 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 13 20:54:02.197763 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:54:02.202556 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:54:02.207658 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:54:02.212731 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:54:02.223343 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:54:02.237353 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:54:02.253124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:54:02.272349 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:54:02.281329 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:54:02.284863 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:54:02.291399 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:54:02.305324 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:54:02.333614 dracut-cmdline[219]: dracut-dracut-053
Jan 13 20:54:02.338926 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:54:02.345807 systemd-resolved[212]: Positive Trust Anchors:
Jan 13 20:54:02.345826 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:54:02.345895 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:54:02.351435 systemd-resolved[212]: Defaulting to hostname 'linux'.
Jan 13 20:54:02.354683 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:54:02.361859 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:54:02.441102 kernel: SCSI subsystem initialized
Jan 13 20:54:02.452100 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:54:02.464109 kernel: iscsi: registered transport (tcp)
Jan 13 20:54:02.487234 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:54:02.487322 kernel: QLogic iSCSI HBA Driver
Jan 13 20:54:02.540249 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:54:02.547334 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:54:02.588674 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:54:02.588797 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:54:02.588851 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:54:02.635147 kernel: raid6: avx2x4 gen() 17667 MB/s
Jan 13 20:54:02.652115 kernel: raid6: avx2x2 gen() 18458 MB/s
Jan 13 20:54:02.669754 kernel: raid6: avx2x1 gen() 14532 MB/s
Jan 13 20:54:02.669838 kernel: raid6: using algorithm avx2x2 gen() 18458 MB/s
Jan 13 20:54:02.687625 kernel: raid6: .... xor() 17597 MB/s, rmw enabled
Jan 13 20:54:02.687734 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 20:54:02.711104 kernel: xor: automatically using best checksumming function avx
Jan 13 20:54:02.891095 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:54:02.905182 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:54:02.913294 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:54:02.946599 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 13 20:54:02.953876 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:54:02.984298 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:54:03.027250 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Jan 13 20:54:03.068266 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:54:03.073478 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:54:03.186849 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:54:03.205320 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:54:03.260232 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:54:03.289258 kernel: scsi host0: Virtio SCSI HBA
Jan 13 20:54:03.288847 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:54:03.305103 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 13 20:54:03.315093 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:54:03.321696 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:54:03.342350 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:54:03.370804 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:54:03.370880 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:54:03.378430 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:54:03.434919 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jan 13 20:54:03.507796 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 13 20:54:03.508043 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 13 20:54:03.508267 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 13 20:54:03.508420 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 20:54:03.508582 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:54:03.508599 kernel: GPT:17805311 != 25165823
Jan 13 20:54:03.508613 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:54:03.508628 kernel: GPT:17805311 != 25165823
Jan 13 20:54:03.508641 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:54:03.508656 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:54:03.508671 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 13 20:54:03.441700 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:54:03.441898 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:54:03.462463 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:54:03.509018 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:54:03.575456 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (450)
Jan 13 20:54:03.509350 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:54:03.606887 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (453)
Jan 13 20:54:03.531415 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:54:03.604526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:54:03.617977 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:54:03.654465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:54:03.668760 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 13 20:54:03.696840 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 13 20:54:03.703471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 13 20:54:03.732810 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 13 20:54:03.748242 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 13 20:54:03.779319 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:54:03.795810 disk-uuid[539]: Primary Header is updated.
Jan 13 20:54:03.795810 disk-uuid[539]: Secondary Entries is updated.
Jan 13 20:54:03.795810 disk-uuid[539]: Secondary Header is updated.
Jan 13 20:54:03.833215 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:54:03.814423 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:54:03.910154 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:54:04.842023 disk-uuid[540]: The operation has completed successfully.
Jan 13 20:54:04.851315 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:54:04.926413 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:54:04.926566 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:54:04.945307 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:54:04.976457 sh[563]: Success
Jan 13 20:54:05.000089 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 20:54:05.089536 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:54:05.096911 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:54:05.121693 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:54:05.171098 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 13 20:54:05.171190 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:54:05.171217 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:54:05.180522 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:54:05.187353 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:54:05.217096 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:54:05.224225 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:54:05.225619 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:54:05.230309 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:54:05.243312 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:54:05.316159 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:54:05.316215 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:54:05.316240 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:54:05.335326 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:54:05.335417 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:54:05.350720 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:54:05.365265 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:54:05.376969 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:54:05.403326 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:54:05.459264 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:54:05.488526 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:54:05.541315 systemd-networkd[746]: lo: Link UP
Jan 13 20:54:05.541700 systemd-networkd[746]: lo: Gained carrier
Jan 13 20:54:05.547544 systemd-networkd[746]: Enumeration completed
Jan 13 20:54:05.549161 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:54:05.549195 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:54:05.549202 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:54:05.558090 systemd-networkd[746]: eth0: Link UP
Jan 13 20:54:05.558097 systemd-networkd[746]: eth0: Gained carrier
Jan 13 20:54:05.604307 ignition[692]: Ignition 2.20.0
Jan 13 20:54:05.558115 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:54:05.604317 ignition[692]: Stage: fetch-offline
Jan 13 20:54:05.570327 systemd-networkd[746]: eth0: DHCPv4 address 10.128.0.13/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 13 20:54:05.604372 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:54:05.622694 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:54:05.604383 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 20:54:05.640011 systemd[1]: Reached target network.target - Network.
Jan 13 20:54:05.604502 ignition[692]: parsed url from cmdline: ""
Jan 13 20:54:05.672719 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:54:05.604509 ignition[692]: no config URL provided
Jan 13 20:54:05.719997 unknown[754]: fetched base config from "system"
Jan 13 20:54:05.604518 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:54:05.720023 unknown[754]: fetched base config from "system"
Jan 13 20:54:05.604532 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:54:05.720042 unknown[754]: fetched user config from "gcp"
Jan 13 20:54:05.604540 ignition[692]: failed to fetch config: resource requires networking
Jan 13 20:54:05.724291 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:54:05.604814 ignition[692]: Ignition finished successfully
Jan 13 20:54:05.747304 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:54:05.703939 ignition[754]: Ignition 2.20.0
Jan 13 20:54:05.784768 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:54:05.703949 ignition[754]: Stage: fetch
Jan 13 20:54:05.806316 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:54:05.704251 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:54:05.843628 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:54:05.704264 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 20:54:05.850598 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:54:05.704385 ignition[754]: parsed url from cmdline: ""
Jan 13 20:54:05.878392 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:54:05.704392 ignition[754]: no config URL provided
Jan 13 20:54:05.884443 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:54:05.704402 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:54:05.912432 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:54:05.704416 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:54:05.918491 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:54:05.704451 ignition[754]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 13 20:54:05.942415 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:54:05.710505 ignition[754]: GET result: OK
Jan 13 20:54:05.710590 ignition[754]: parsing config with SHA512: e690b5ccdbd11a035e54fd96aa680d44ad10cef6ef124b0165ab7e0c0ffa8556d6085df905b282dac04a6c18b616b67dba4bf49d8fd7640ac40e9c6dd1aeb2dd
Jan 13 20:54:05.722384 ignition[754]: fetch: fetch complete
Jan 13 20:54:05.722402 ignition[754]: fetch: fetch passed
Jan 13 20:54:05.722502 ignition[754]: Ignition finished successfully
Jan 13 20:54:05.782304 ignition[760]: Ignition 2.20.0
Jan 13 20:54:05.782314 ignition[760]: Stage: kargs
Jan 13 20:54:05.782510 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:54:05.782522 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 20:54:05.783617 ignition[760]: kargs: kargs passed
Jan 13 20:54:05.783681 ignition[760]: Ignition finished successfully
Jan 13 20:54:05.831532 ignition[766]: Ignition 2.20.0
Jan 13 20:54:05.831563 ignition[766]: Stage: disks
Jan 13 20:54:05.831767 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:54:05.831779 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 20:54:05.832763 ignition[766]: disks: disks passed
Jan 13 20:54:05.832819 ignition[766]: Ignition finished successfully
Jan 13 20:54:06.006786 systemd-fsck[775]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:54:06.148210 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:54:06.153276 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:54:06.312531 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 13 20:54:06.313484 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:54:06.314419 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:54:06.334368 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:54:06.364249 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:54:06.412270 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (783)
Jan 13 20:54:06.412347 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:54:06.412373 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:54:06.412396 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:54:06.403976 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:54:06.448262 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:54:06.448317 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:54:06.404039 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:54:06.404096 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:54:06.429926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:54:06.475844 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:54:06.499320 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:54:06.614209 initrd-setup-root[807]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:54:06.624244 initrd-setup-root[814]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:54:06.632835 initrd-setup-root[821]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:54:06.643255 initrd-setup-root[828]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:54:06.784999 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:54:06.813275 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:54:06.816322 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:54:06.837132 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:54:06.853564 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:54:06.901347 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:54:06.911753 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:54:06.937428 ignition[896]: INFO : Ignition 2.20.0
Jan 13 20:54:06.937428 ignition[896]: INFO : Stage: mount
Jan 13 20:54:06.937428 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:54:06.937428 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 20:54:06.937428 ignition[896]: INFO : mount: mount passed
Jan 13 20:54:06.937428 ignition[896]: INFO : Ignition finished successfully
Jan 13 20:54:06.935273 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:54:07.326359 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:54:07.372765 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (908)
Jan 13 20:54:07.372813 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:54:07.372937 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:54:07.373006 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:54:07.388482 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:54:07.388581 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:54:07.391523 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:54:07.432704 ignition[925]: INFO : Ignition 2.20.0
Jan 13 20:54:07.432704 ignition[925]: INFO : Stage: files
Jan 13 20:54:07.448214 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:54:07.448214 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 20:54:07.448214 ignition[925]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:54:07.448214 ignition[925]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:54:07.448214 ignition[925]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:54:07.448214 ignition[925]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:54:07.448214 ignition[925]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:54:07.448214 ignition[925]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:54:07.448214 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:54:07.448214 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:54:07.442274 unknown[925]: wrote ssh authorized keys file for user: core
Jan 13 20:54:07.549245 systemd-networkd[746]: eth0: Gained IPv6LL
Jan 13 20:54:07.593195 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:54:07.817986 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:54:07.835442 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:54:07.835442 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 13 20:54:08.091219 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:54:08.232834 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 20:54:08.487341 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 20:54:08.713957 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:54:08.713957 ignition[925]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 20:54:08.752284 ignition[925]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:54:08.752284 ignition[925]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:54:08.752284 ignition[925]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 20:54:08.752284 ignition[925]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:54:08.752284 ignition[925]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:54:08.752284 ignition[925]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:54:08.752284 ignition[925]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:54:08.752284 ignition[925]: INFO : files: files passed Jan 13 20:54:08.752284 ignition[925]: INFO : Ignition finished successfully Jan 13 20:54:08.718127 systemd[1]: Finished ignition-files.service - Ignition (files). 
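The files stage above is a complete inventory of what the user-provided Ignition config requested: two tarballs fetched over HTTPS, several files under /home/core, an update.conf, a sysext image plus its /etc/extensions symlink, and an enabled prepare-helm.service unit. As a rough illustration, a Python sketch assembling an Ignition v3-style config with the same paths and URLs; the spec version, SSH key placeholder, and unit body are assumptions, not recovered from the log, and the files written without a logged GET (install.sh, the YAML manifests, update.conf) presumably carried inline contents and are omitted here.

    import json

    # Sketch of an Ignition v3-style config matching the operations logged
    # above. Paths and URLs are taken from the log; the version string, the
    # SSH key, and the unit body are illustrative placeholders.
    config = {
        "ignition": {"version": "3.3.0"},  # assumption; Ignition 2.20.0 accepts several 3.x specs
        "passwd": {
            "users": [{
                "name": "core",
                # placeholder: the actual key is not recorded in the log
                "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"],
            }]
        },
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
                 "hard": False},
            ],
        },
        "systemd": {
            "units": [{
                "name": "prepare-helm.service",
                "enabled": True,  # matches 'setting preset to enabled' above
                "contents": "[Unit]\nDescription=Unpack helm (placeholder body)\n",
            }]
        },
    }

    print(json.dumps(config, indent=2))

Note how each logged op maps onto one config entry: op(3), op(4) and op(b) are storage.files with remote sources, op(a) is the storage.links entry, and op(c)/op(e) correspond to the systemd unit and its preset.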
Jan 13 20:54:08.738374 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:54:08.776374 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:54:08.827755 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:54:08.968284 initrd-setup-root-after-ignition[952]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:54:08.968284 initrd-setup-root-after-ignition[952]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:54:08.827932 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:54:09.027280 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:54:08.841302 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:54:08.852618 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:54:08.882322 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:54:08.968388 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:54:08.968682 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:54:08.982679 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:54:09.017412 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:54:09.037486 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:54:09.044617 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:54:09.085310 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:54:09.113333 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:54:09.149312 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:54:09.149431 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:54:09.168321 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:54:09.191401 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:54:09.202670 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:54:09.222575 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:54:09.222675 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:54:09.258681 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:54:09.278521 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:54:09.311357 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:54:09.320496 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:54:09.340626 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:54:09.358504 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:54:09.393275 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:54:09.403517 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:54:09.423452 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:54:09.440484 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:54:09.474350 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:54:09.474474 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:54:09.500532 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:54:09.527443 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:54:09.537481 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:54:09.537584 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:54:09.557479 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:54:09.557565 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:54:09.596560 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:54:09.596659 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:54:09.605511 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:54:09.605585 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:54:09.672381 ignition[978]: INFO : Ignition 2.20.0
Jan 13 20:54:09.672381 ignition[978]: INFO : Stage: umount
Jan 13 20:54:09.672381 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:54:09.672381 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 20:54:09.672381 ignition[978]: INFO : umount: umount passed
Jan 13 20:54:09.672381 ignition[978]: INFO : Ignition finished successfully
Jan 13 20:54:09.632327 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:54:09.680239 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:54:09.680367 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:54:09.706252 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:54:09.743384 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:54:09.743506 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:54:09.751563 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:54:09.751638 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:54:09.792091 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:54:09.792891 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:54:09.793014 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:54:09.812717 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:54:09.812844 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:54:09.821875 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:54:09.821949 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:54:09.837517 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:54:09.837594 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:54:09.855553 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:54:09.855622 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:54:09.872519 systemd[1]: Stopped target network.target - Network.
Jan 13 20:54:09.889480 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:54:09.889569 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:54:09.905519 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:54:09.932257 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:54:09.935268 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:54:09.940438 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:54:09.958495 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:54:09.973542 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:54:09.973606 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:54:09.988547 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:54:09.988611 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:54:10.022445 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:54:10.022533 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:54:10.030528 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:54:10.030602 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:54:10.064457 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:54:10.064538 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:54:10.072749 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:54:10.078181 systemd-networkd[746]: eth0: DHCPv6 lease lost
Jan 13 20:54:10.100615 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:54:10.120781 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:54:10.120927 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:54:10.140774 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:54:10.141105 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:54:10.150528 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:54:10.150588 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:54:10.176222 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:54:10.203211 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:54:10.203336 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:54:10.221402 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:54:10.221494 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:54:10.239317 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:54:10.239414 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:54:10.257477 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:54:10.257602 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:54:10.278497 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:54:10.298810 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:54:10.663244 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:54:10.299005 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:54:10.313586 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:54:10.313654 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:54:10.334437 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:54:10.334496 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:54:10.354492 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:54:10.354569 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:54:10.390461 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:54:10.390567 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:54:10.416500 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:54:10.416586 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:54:10.451324 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:54:10.463410 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:54:10.463494 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:54:10.480520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:54:10.480594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:54:10.511962 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:54:10.512116 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:54:10.531622 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:54:10.531744 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:54:10.553560 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:54:10.569428 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:54:10.613156 systemd[1]: Switching root.
Jan 13 20:54:10.890207 systemd-journald[184]: Journal stopped Jan 13 20:54:02.120313 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025 Jan 13 20:54:02.120364 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:54:02.120384 kernel: BIOS-provided physical RAM map: Jan 13 20:54:02.120399 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 13 20:54:02.120413 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 13 20:54:02.120426 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 13 20:54:02.120444 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 13 20:54:02.120465 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 13 20:54:02.120479 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd324fff] usable Jan 13 20:54:02.120491 kernel: BIOS-e820: [mem 0x00000000bd325000-0x00000000bd32dfff] ACPI data Jan 13 20:54:02.120504 kernel: BIOS-e820: [mem 0x00000000bd32e000-0x00000000bf8ecfff] usable Jan 13 20:54:02.120518 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jan 13 20:54:02.120530 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 13 20:54:02.120543 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 13 20:54:02.120564 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 13 20:54:02.120582 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 13 20:54:02.120597 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 13 20:54:02.120612 kernel: NX (Execute Disable) protection: active Jan 13 20:54:02.120627 kernel: APIC: Static calls initialized Jan 13 20:54:02.120643 kernel: efi: EFI v2.7 by EDK II Jan 13 20:54:02.120661 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd325018 Jan 13 20:54:02.120679 kernel: random: crng init done Jan 13 20:54:02.120696 kernel: secureboot: Secure boot disabled Jan 13 20:54:02.120711 kernel: SMBIOS 2.4 present. 
Jan 13 20:54:02.120733 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 13 20:54:02.120749 kernel: Hypervisor detected: KVM Jan 13 20:54:02.120766 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 20:54:02.120783 kernel: kvm-clock: using sched offset of 13023606109 cycles Jan 13 20:54:02.120801 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 20:54:02.120819 kernel: tsc: Detected 2299.998 MHz processor Jan 13 20:54:02.120836 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 20:54:02.120854 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 20:54:02.120871 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 13 20:54:02.120891 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 13 20:54:02.120906 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 20:54:02.120923 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 13 20:54:02.120937 kernel: Using GB pages for direct mapping Jan 13 20:54:02.120953 kernel: ACPI: Early table checksum verification disabled Jan 13 20:54:02.120969 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 13 20:54:02.120985 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 13 20:54:02.121007 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 13 20:54:02.121030 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 13 20:54:02.121048 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 13 20:54:02.121079 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 13 20:54:02.121095 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 13 20:54:02.121110 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 13 20:54:02.121126 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 13 20:54:02.121147 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 13 20:54:02.121163 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 13 20:54:02.121179 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 13 20:54:02.121195 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 13 20:54:02.121211 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 13 20:54:02.121227 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 13 20:54:02.121244 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 13 20:54:02.121267 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 13 20:54:02.121282 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 13 20:54:02.121304 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 13 20:54:02.121321 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 13 20:54:02.121337 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 13 20:54:02.121353 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 13 20:54:02.121370 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 13 20:54:02.121387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jan 13 20:54:02.121403 kernel: ACPI: 
SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 13 20:54:02.121419 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 13 20:54:02.121435 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 13 20:54:02.121456 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jan 13 20:54:02.121474 kernel: Zone ranges: Jan 13 20:54:02.121491 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 20:54:02.121506 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 13 20:54:02.121524 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 13 20:54:02.121539 kernel: Movable zone start for each node Jan 13 20:54:02.121555 kernel: Early memory node ranges Jan 13 20:54:02.121571 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 13 20:54:02.121588 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 13 20:54:02.121604 kernel: node 0: [mem 0x0000000000100000-0x00000000bd324fff] Jan 13 20:54:02.121626 kernel: node 0: [mem 0x00000000bd32e000-0x00000000bf8ecfff] Jan 13 20:54:02.121642 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 13 20:54:02.121658 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 13 20:54:02.121676 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 13 20:54:02.121692 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 20:54:02.121710 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 13 20:54:02.121726 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 13 20:54:02.121743 kernel: On node 0, zone DMA32: 9 pages in unavailable ranges Jan 13 20:54:02.121760 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 13 20:54:02.121781 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 13 20:54:02.121797 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 13 20:54:02.121813 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 20:54:02.121830 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 20:54:02.121846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 20:54:02.121864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 20:54:02.121881 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 20:54:02.121897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 20:54:02.121915 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 20:54:02.121938 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 13 20:54:02.121955 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 13 20:54:02.121970 kernel: Booting paravirtualized kernel on KVM Jan 13 20:54:02.121984 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 20:54:02.121999 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 13 20:54:02.122016 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 13 20:54:02.122032 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 13 20:54:02.122046 kernel: pcpu-alloc: [0] 0 1 Jan 13 20:54:02.122092 kernel: kvm-guest: PV spinlocks enabled Jan 13 20:54:02.122117 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 20:54:02.122137 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:54:02.122155 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:54:02.122172 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 13 20:54:02.122191 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:54:02.122209 kernel: Fallback order for Node 0: 0 Jan 13 20:54:02.122227 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932271 Jan 13 20:54:02.122247 kernel: Policy zone: Normal Jan 13 20:54:02.122278 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:54:02.122297 kernel: software IO TLB: area num 2. Jan 13 20:54:02.122316 kernel: Memory: 7513360K/7860548K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 346932K reserved, 0K cma-reserved) Jan 13 20:54:02.122333 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 20:54:02.122352 kernel: Kernel/User page tables isolation: enabled Jan 13 20:54:02.122371 kernel: ftrace: allocating 37920 entries in 149 pages Jan 13 20:54:02.122388 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 20:54:02.122407 kernel: Dynamic Preempt: voluntary Jan 13 20:54:02.122444 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:54:02.122465 kernel: rcu: RCU event tracing is enabled. Jan 13 20:54:02.122484 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 20:54:02.122504 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:54:02.122527 kernel: Rude variant of Tasks RCU enabled. Jan 13 20:54:02.122546 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:54:02.122565 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 20:54:02.122585 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 20:54:02.122605 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 13 20:54:02.122629 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:54:02.122648 kernel: Console: colour dummy device 80x25 Jan 13 20:54:02.122668 kernel: printk: console [ttyS0] enabled Jan 13 20:54:02.122687 kernel: ACPI: Core revision 20230628 Jan 13 20:54:02.122706 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 20:54:02.122726 kernel: x2apic enabled Jan 13 20:54:02.122745 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 20:54:02.122765 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 13 20:54:02.122784 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 13 20:54:02.122807 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 13 20:54:02.122826 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 13 20:54:02.122843 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 13 20:54:02.122863 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 20:54:02.122882 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 13 20:54:02.122901 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 13 20:54:02.122920 kernel: Spectre V2 : Mitigation: IBRS Jan 13 20:54:02.122939 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 20:54:02.122962 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 20:54:02.122981 kernel: RETBleed: Mitigation: IBRS Jan 13 20:54:02.123000 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 20:54:02.123020 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 13 20:54:02.123036 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 20:54:02.123054 kernel: MDS: Mitigation: Clear CPU buffers Jan 13 20:54:02.123097 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 13 20:54:02.123116 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 20:54:02.123136 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 20:54:02.123161 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 20:54:02.123180 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 20:54:02.123200 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 13 20:54:02.123217 kernel: Freeing SMP alternatives memory: 32K Jan 13 20:54:02.123235 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:54:02.123262 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:54:02.123281 kernel: landlock: Up and running. Jan 13 20:54:02.123300 kernel: SELinux: Initializing. Jan 13 20:54:02.123320 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 20:54:02.123344 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 20:54:02.123363 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 13 20:54:02.123382 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:54:02.123402 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:54:02.123422 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:54:02.123442 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 13 20:54:02.123461 kernel: signal: max sigframe size: 1776 Jan 13 20:54:02.123480 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:54:02.123500 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:54:02.123523 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 13 20:54:02.123541 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:54:02.123561 kernel: smpboot: x86: Booting SMP configuration: Jan 13 20:54:02.123581 kernel: .... node #0, CPUs: #1 Jan 13 20:54:02.123601 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 13 20:54:02.123622 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 13 20:54:02.123641 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 20:54:02.123661 kernel: smpboot: Max logical packages: 1 Jan 13 20:54:02.123684 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 13 20:54:02.123704 kernel: devtmpfs: initialized Jan 13 20:54:02.123724 kernel: x86/mm: Memory block size: 128MB Jan 13 20:54:02.123742 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 13 20:54:02.123762 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:54:02.123782 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 20:54:02.123802 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:54:02.123821 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:54:02.123841 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:54:02.123864 kernel: audit: type=2000 audit(1736801640.763:1): state=initialized audit_enabled=0 res=1 Jan 13 20:54:02.123884 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:54:02.123904 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 20:54:02.123922 kernel: cpuidle: using governor menu Jan 13 20:54:02.123942 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:54:02.123962 kernel: dca service started, version 1.12.1 Jan 13 20:54:02.123981 kernel: PCI: Using configuration type 1 for base access Jan 13 20:54:02.124000 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 20:54:02.124015 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:54:02.124036 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:54:02.124069 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:54:02.124096 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:54:02.124111 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:54:02.124127 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:54:02.124143 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:54:02.124157 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:54:02.124173 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 13 20:54:02.124189 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 20:54:02.124213 kernel: ACPI: Interpreter enabled Jan 13 20:54:02.124232 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 20:54:02.124251 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 20:54:02.124277 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 20:54:02.124299 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 13 20:54:02.124316 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 13 20:54:02.124335 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 20:54:02.124612 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:54:02.124814 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 13 20:54:02.124993 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 13 20:54:02.125016 kernel: PCI host bridge to bus 0000:00 Jan 13 20:54:02.125247 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 20:54:02.125452 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 20:54:02.125641 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 20:54:02.125810 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 13 20:54:02.125987 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 20:54:02.126243 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 13 20:54:02.126720 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 13 20:54:02.126990 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 13 20:54:02.127227 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 13 20:54:02.127430 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 13 20:54:02.127644 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 13 20:54:02.127834 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 13 20:54:02.128030 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 13 20:54:02.128254 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 13 20:54:02.128446 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 13 20:54:02.128656 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 20:54:02.128848 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 13 20:54:02.129046 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 13 20:54:02.129100 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 20:54:02.129121 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 20:54:02.129139 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 20:54:02.129158 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 20:54:02.129177 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 13 20:54:02.129195 kernel: iommu: Default domain type: Translated Jan 13 20:54:02.129214 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 20:54:02.129232 kernel: efivars: Registered efivars operations Jan 13 20:54:02.129257 kernel: PCI: Using ACPI for IRQ routing Jan 13 20:54:02.129277 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 20:54:02.129296 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 13 20:54:02.129315 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 13 20:54:02.129333 kernel: e820: reserve RAM buffer [mem 0xbd325000-0xbfffffff] Jan 13 20:54:02.129352 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 13 20:54:02.129370 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 13 20:54:02.129388 kernel: vgaarb: loaded Jan 13 20:54:02.129406 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 20:54:02.129429 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:54:02.129449 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:54:02.129469 kernel: pnp: PnP ACPI init Jan 13 20:54:02.129496 kernel: pnp: PnP ACPI: found 7 devices Jan 13 20:54:02.129516 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 20:54:02.129536 kernel: NET: Registered PF_INET protocol family Jan 13 20:54:02.129554 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 13 20:54:02.129574 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 13 20:54:02.129599 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:54:02.129619 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:54:02.129639 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 20:54:02.129659 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 13 20:54:02.129676 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 13 20:54:02.129703 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 13 20:54:02.129728 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:54:02.129748 kernel: NET: Registered PF_XDP protocol family Jan 13 20:54:02.129944 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 20:54:02.130191 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 20:54:02.130360 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 20:54:02.130530 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 13 20:54:02.130718 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 20:54:02.130742 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:54:02.130761 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 20:54:02.130780 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 13 20:54:02.130805 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 20:54:02.130824 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 13 20:54:02.130843 kernel: clocksource: Switched to clocksource tsc Jan 
13 20:54:02.130862 kernel: Initialise system trusted keyrings Jan 13 20:54:02.130880 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 13 20:54:02.130898 kernel: Key type asymmetric registered Jan 13 20:54:02.130916 kernel: Asymmetric key parser 'x509' registered Jan 13 20:54:02.130934 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:54:02.130953 kernel: io scheduler mq-deadline registered Jan 13 20:54:02.130975 kernel: io scheduler kyber registered Jan 13 20:54:02.130994 kernel: io scheduler bfq registered Jan 13 20:54:02.131012 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 20:54:02.131033 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 13 20:54:02.131269 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 13 20:54:02.131294 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 13 20:54:02.131469 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 13 20:54:02.131500 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 13 20:54:02.131677 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 13 20:54:02.131705 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:54:02.131724 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 20:54:02.131742 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 13 20:54:02.131760 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 13 20:54:02.131779 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 13 20:54:02.131962 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 13 20:54:02.131988 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 20:54:02.132006 kernel: i8042: Warning: Keylock active Jan 13 20:54:02.132029 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 20:54:02.132048 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 20:54:02.132244 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 13 20:54:02.132412 kernel: rtc_cmos 00:00: registered as rtc0 Jan 13 20:54:02.132588 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T20:54:01 UTC (1736801641) Jan 13 20:54:02.132753 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 13 20:54:02.132777 kernel: intel_pstate: CPU model not supported Jan 13 20:54:02.132796 kernel: pstore: Using crash dump compression: deflate Jan 13 20:54:02.132820 kernel: pstore: Registered efi_pstore as persistent store backend Jan 13 20:54:02.132839 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:54:02.132857 kernel: Segment Routing with IPv6 Jan 13 20:54:02.132876 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:54:02.132895 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:54:02.132913 kernel: Key type dns_resolver registered Jan 13 20:54:02.132931 kernel: IPI shorthand broadcast: enabled Jan 13 20:54:02.132949 kernel: sched_clock: Marking stable (898004536, 191685512)->(1117403573, -27713525) Jan 13 20:54:02.132967 kernel: registered taskstats version 1 Jan 13 20:54:02.132989 kernel: Loading compiled-in X.509 certificates Jan 13 20:54:02.133008 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 13 20:54:02.133026 kernel: Key type .fscrypt registered Jan 13 20:54:02.133044 kernel: Key type fscrypt-provisioning registered Jan 13 20:54:02.133086 kernel: ima: Allocated hash algorithm: 
sha1 Jan 13 20:54:02.133104 kernel: ima: No architecture policies found Jan 13 20:54:02.133123 kernel: clk: Disabling unused clocks Jan 13 20:54:02.133141 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 13 20:54:02.133160 kernel: Write protecting the kernel read-only data: 36864k Jan 13 20:54:02.133183 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 13 20:54:02.133201 kernel: Run /init as init process Jan 13 20:54:02.133219 kernel: with arguments: Jan 13 20:54:02.133236 kernel: /init Jan 13 20:54:02.133255 kernel: with environment: Jan 13 20:54:02.133273 kernel: HOME=/ Jan 13 20:54:02.133291 kernel: TERM=linux Jan 13 20:54:02.133309 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:54:02.133327 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 20:54:02.133355 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:54:02.133377 systemd[1]: Detected virtualization google. Jan 13 20:54:02.133397 systemd[1]: Detected architecture x86-64. Jan 13 20:54:02.133415 systemd[1]: Running in initrd. Jan 13 20:54:02.133434 systemd[1]: No hostname configured, using default hostname. Jan 13 20:54:02.133453 systemd[1]: Hostname set to . Jan 13 20:54:02.133472 systemd[1]: Initializing machine ID from random generator. Jan 13 20:54:02.133504 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:54:02.133524 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:54:02.133543 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:54:02.133563 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:54:02.133582 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:54:02.133601 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:54:02.133621 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:54:02.133647 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:54:02.133684 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:54:02.133708 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:54:02.133729 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:54:02.133749 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:54:02.133776 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:54:02.133797 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:54:02.133817 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:54:02.133837 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:54:02.133857 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:54:02.133877 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 13 20:54:02.133898 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:54:02.133918 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:54:02.133938 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:54:02.133962 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:54:02.133982 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:54:02.134002 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:54:02.134023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:54:02.134043 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:54:02.134106 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:54:02.134127 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:54:02.134147 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:54:02.134167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:54:02.134232 systemd-journald[184]: Collecting audit messages is disabled. Jan 13 20:54:02.134276 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:54:02.134297 systemd-journald[184]: Journal started Jan 13 20:54:02.134340 systemd-journald[184]: Runtime Journal (/run/log/journal/2f624896666b4a128c85dac7c6fe77b9) is 8.0M, max 148.7M, 140.7M free. Jan 13 20:54:02.137266 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:54:02.140880 systemd-modules-load[185]: Inserted module 'overlay' Jan 13 20:54:02.142773 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:54:02.148966 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:54:02.173353 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:54:02.183840 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:54:02.191634 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:54:02.191679 kernel: Bridge firewalling registered Jan 13 20:54:02.190917 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 13 20:54:02.197763 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:54:02.202556 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:54:02.207658 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:54:02.212731 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:54:02.223343 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:54:02.237353 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:54:02.253124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:54:02.272349 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:54:02.281329 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:54:02.284863 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:54:02.291399 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:54:02.305324 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:54:02.333614 dracut-cmdline[219]: dracut-dracut-053 Jan 13 20:54:02.338926 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:54:02.345807 systemd-resolved[212]: Positive Trust Anchors: Jan 13 20:54:02.345826 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:54:02.345895 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:54:02.351435 systemd-resolved[212]: Defaulting to hostname 'linux'. Jan 13 20:54:02.354683 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:54:02.361859 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:54:02.441102 kernel: SCSI subsystem initialized Jan 13 20:54:02.452100 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:54:02.464109 kernel: iscsi: registered transport (tcp) Jan 13 20:54:02.487234 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:54:02.487322 kernel: QLogic iSCSI HBA Driver Jan 13 20:54:02.540249 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:54:02.547334 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:54:02.588674 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:54:02.588797 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:54:02.588851 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:54:02.635147 kernel: raid6: avx2x4 gen() 17667 MB/s Jan 13 20:54:02.652115 kernel: raid6: avx2x2 gen() 18458 MB/s Jan 13 20:54:02.669754 kernel: raid6: avx2x1 gen() 14532 MB/s Jan 13 20:54:02.669838 kernel: raid6: using algorithm avx2x2 gen() 18458 MB/s Jan 13 20:54:02.687625 kernel: raid6: .... xor() 17597 MB/s, rmw enabled Jan 13 20:54:02.687734 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:54:02.711104 kernel: xor: automatically using best checksumming function avx Jan 13 20:54:02.891095 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:54:02.905182 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:54:02.913294 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:54:02.946599 systemd-udevd[401]: Using default interface naming scheme 'v255'. 
Jan 13 20:54:02.953876 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:54:02.984298 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:54:03.027250 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Jan 13 20:54:03.068266 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:54:03.073478 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:54:03.186849 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:54:03.205320 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:54:03.260232 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:54:03.289258 kernel: scsi host0: Virtio SCSI HBA Jan 13 20:54:03.288847 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:54:03.305103 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 13 20:54:03.315093 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:54:03.321696 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:54:03.342350 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:54:03.370804 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:54:03.370880 kernel: AES CTR mode by8 optimization enabled Jan 13 20:54:03.378430 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:54:03.434919 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 13 20:54:03.507796 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 13 20:54:03.508043 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 13 20:54:03.508267 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 13 20:54:03.508420 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 20:54:03.508582 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:54:03.508599 kernel: GPT:17805311 != 25165823 Jan 13 20:54:03.508613 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:54:03.508628 kernel: GPT:17805311 != 25165823 Jan 13 20:54:03.508641 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:54:03.508656 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:54:03.508671 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 13 20:54:03.441700 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:54:03.441898 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:54:03.462463 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:54:03.509018 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:54:03.575456 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (450) Jan 13 20:54:03.509350 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:54:03.606887 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (453) Jan 13 20:54:03.531415 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:54:03.604526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 20:54:03.617977 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:54:03.654465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:54:03.668760 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 13 20:54:03.696840 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 13 20:54:03.703471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 13 20:54:03.732810 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 13 20:54:03.748242 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 13 20:54:03.779319 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:54:03.795810 disk-uuid[539]: Primary Header is updated. Jan 13 20:54:03.795810 disk-uuid[539]: Secondary Entries is updated. Jan 13 20:54:03.795810 disk-uuid[539]: Secondary Header is updated. Jan 13 20:54:03.833215 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:54:03.814423 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:54:03.910154 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:54:04.842023 disk-uuid[540]: The operation has completed successfully. Jan 13 20:54:04.851315 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:54:04.926413 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:54:04.926566 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:54:04.945307 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:54:04.976457 sh[563]: Success Jan 13 20:54:05.000089 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 20:54:05.089536 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:54:05.096911 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:54:05.121693 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:54:05.171098 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 13 20:54:05.171190 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:54:05.171217 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:54:05.180522 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:54:05.187353 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:54:05.217096 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:54:05.224225 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:54:05.225619 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:54:05.230309 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:54:05.243312 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
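verity-setup has now mapped the read-only /usr partition through dm-verity, using the kernel's AVX2 sha256 implementation; the verity.usrhash value on the kernel command line is the root hash the device is checked against. A heavily simplified single-level sketch of the idea (real dm-verity builds a salted, multi-level Merkle tree described by a superblock):

    # Toy version of the dm-verity idea: hash fixed-size blocks, then hash
    # the concatenated block digests into one root. A single root hash can
    # then authenticate every block of a large read-only image.
    import hashlib

    BLOCK = 4096

    def toy_verity_root(data: bytes) -> str:
        leaves = b"".join(
            hashlib.sha256(data[off:off + BLOCK].ljust(BLOCK, b"\0")).digest()
            for off in range(0, len(data), BLOCK)
        )
        return hashlib.sha256(leaves).hexdigest()

    image = b"pretend this is the /usr image" * 4096
    root = toy_verity_root(image)
    # Flipping any bit of `image` changes `root`, which is how offline
    # tampering with /usr gets detected at read time.
    assert toy_verity_root(image) == root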
Jan 13 20:54:05.316159 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:54:05.316215 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:54:05.316240 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:54:05.335326 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:54:05.335417 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:54:05.350720 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:54:05.365265 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:54:05.376969 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:54:05.403326 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:54:05.459264 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:54:05.488526 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:54:05.541315 systemd-networkd[746]: lo: Link UP Jan 13 20:54:05.541700 systemd-networkd[746]: lo: Gained carrier Jan 13 20:54:05.547544 systemd-networkd[746]: Enumeration completed Jan 13 20:54:05.549161 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:54:05.549195 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:54:05.549202 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:54:05.558090 systemd-networkd[746]: eth0: Link UP Jan 13 20:54:05.558097 systemd-networkd[746]: eth0: Gained carrier Jan 13 20:54:05.604307 ignition[692]: Ignition 2.20.0 Jan 13 20:54:05.558115 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:54:05.604317 ignition[692]: Stage: fetch-offline Jan 13 20:54:05.570327 systemd-networkd[746]: eth0: DHCPv4 address 10.128.0.13/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 13 20:54:05.604372 ignition[692]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:54:05.622694 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:54:05.604383 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:54:05.640011 systemd[1]: Reached target network.target - Network. Jan 13 20:54:05.604502 ignition[692]: parsed url from cmdline: "" Jan 13 20:54:05.672719 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 20:54:05.604509 ignition[692]: no config URL provided Jan 13 20:54:05.719997 unknown[754]: fetched base config from "system" Jan 13 20:54:05.604518 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:54:05.720023 unknown[754]: fetched base config from "system" Jan 13 20:54:05.604532 ignition[692]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:54:05.720042 unknown[754]: fetched user config from "gcp" Jan 13 20:54:05.604540 ignition[692]: failed to fetch config: resource requires networking Jan 13 20:54:05.724291 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 20:54:05.604814 ignition[692]: Ignition finished successfully Jan 13 20:54:05.747304 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
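With DHCP up, the fetch stage can do what fetch-offline could not: pull the user config from the GCE metadata server (the GET is logged just below). A minimal sketch of that request; the Metadata-Flavor header is required by GCE, while the timeout and error handling here are illustrative:

    # Sketch of the metadata request Ignition's fetch stage performs.
    import urllib.request

    URL = ("http://169.254.169.254/computeMetadata/v1/"
           "instance/attributes/user-data")

    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            user_data = resp.read()   # raw Ignition config bytes
    except OSError as exc:
        # Before networking is up (or off-instance) this fails, which is
        # exactly why the earlier fetch-offline stage gave up.
        user_data = None
        print("metadata fetch failed:", exc)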
Jan 13 20:54:05.703939 ignition[754]: Ignition 2.20.0 Jan 13 20:54:05.784768 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:54:05.703949 ignition[754]: Stage: fetch Jan 13 20:54:05.806316 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:54:05.704251 ignition[754]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:54:05.843628 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:54:05.704264 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:54:05.850598 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:54:05.704385 ignition[754]: parsed url from cmdline: "" Jan 13 20:54:05.878392 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:54:05.704392 ignition[754]: no config URL provided Jan 13 20:54:05.884443 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:54:05.704402 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:54:05.912432 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:54:05.704416 ignition[754]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:54:05.918491 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:54:05.704451 ignition[754]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 13 20:54:05.942415 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:54:05.710505 ignition[754]: GET result: OK Jan 13 20:54:05.710590 ignition[754]: parsing config with SHA512: e690b5ccdbd11a035e54fd96aa680d44ad10cef6ef124b0165ab7e0c0ffa8556d6085df905b282dac04a6c18b616b67dba4bf49d8fd7640ac40e9c6dd1aeb2dd Jan 13 20:54:05.722384 ignition[754]: fetch: fetch complete Jan 13 20:54:05.722402 ignition[754]: fetch: fetch passed Jan 13 20:54:05.722502 ignition[754]: Ignition finished successfully Jan 13 20:54:05.782304 ignition[760]: Ignition 2.20.0 Jan 13 20:54:05.782314 ignition[760]: Stage: kargs Jan 13 20:54:05.782510 ignition[760]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:54:05.782522 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:54:05.783617 ignition[760]: kargs: kargs passed Jan 13 20:54:05.783681 ignition[760]: Ignition finished successfully Jan 13 20:54:05.831532 ignition[766]: Ignition 2.20.0 Jan 13 20:54:05.831563 ignition[766]: Stage: disks Jan 13 20:54:05.831767 ignition[766]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:54:05.831779 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:54:05.832763 ignition[766]: disks: disks passed Jan 13 20:54:05.832819 ignition[766]: Ignition finished successfully Jan 13 20:54:06.006786 systemd-fsck[775]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 20:54:06.148210 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:54:06.153276 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:54:06.312531 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 13 20:54:06.313484 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:54:06.314419 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:54:06.334368 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
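Note that the fetch stage above logs a SHA512 of the config it parsed. Which exact bytes Ignition hashes is an internal detail, but the point of such a digest is a stable fingerprint for matching a config to a boot log after the fact; computing one is a one-liner:

    # Fingerprint some config bytes the way the SHA512 log line implies.
    import hashlib

    config_bytes = b'{"ignition": {"version": "3.4.0"}}'  # placeholder config
    print(hashlib.sha512(config_bytes).hexdigest())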
Jan 13 20:54:06.364249 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:54:06.412270 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (783) Jan 13 20:54:06.412347 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:54:06.412373 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:54:06.412396 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:54:06.403976 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:54:06.448262 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:54:06.448317 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:54:06.404039 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:54:06.404096 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:54:06.429926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:54:06.475844 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:54:06.499320 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:54:06.614209 initrd-setup-root[807]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:54:06.624244 initrd-setup-root[814]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:54:06.632835 initrd-setup-root[821]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:54:06.643255 initrd-setup-root[828]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:54:06.784999 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:54:06.813275 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:54:06.816322 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:54:06.837132 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:54:06.853564 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:54:06.901347 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:54:06.911753 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:54:06.937428 ignition[896]: INFO : Ignition 2.20.0 Jan 13 20:54:06.937428 ignition[896]: INFO : Stage: mount Jan 13 20:54:06.937428 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:54:06.937428 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:54:06.937428 ignition[896]: INFO : mount: mount passed Jan 13 20:54:06.937428 ignition[896]: INFO : Ignition finished successfully Jan 13 20:54:06.935273 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:54:07.326359 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
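The "cut: /sysroot/etc/passwd: No such file or directory" lines above come from the root-filesystem setup step slicing colon-separated account databases that simply do not exist yet on the freshly created root. The format being sliced, with an illustrative entry:

    # One /etc/passwd record is seven colon-separated fields.
    line = "core:x:500:500:CoreOS Admin:/home/core:/bin/bash"  # example entry
    name, pw, uid, gid, gecos, home, shell = line.split(":")
    print(name, uid, home)  # core 500 /home/core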
Jan 13 20:54:07.372765 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (908) Jan 13 20:54:07.372813 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:54:07.372937 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:54:07.373006 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:54:07.388482 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:54:07.388581 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:54:07.391523 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:54:07.432704 ignition[925]: INFO : Ignition 2.20.0 Jan 13 20:54:07.432704 ignition[925]: INFO : Stage: files Jan 13 20:54:07.448214 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:54:07.448214 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:54:07.448214 ignition[925]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:54:07.448214 ignition[925]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:54:07.448214 ignition[925]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:54:07.448214 ignition[925]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:54:07.448214 ignition[925]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:54:07.448214 ignition[925]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:54:07.448214 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:54:07.448214 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 20:54:07.442274 unknown[925]: wrote ssh authorized keys file for user: core Jan 13 20:54:07.549245 systemd-networkd[746]: eth0: Gained IPv6LL Jan 13 20:54:07.593195 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:54:07.817986 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:54:07.835442 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:54:07.835442 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 20:54:08.091219 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:54:08.232834 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: 
createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:54:08.249251 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 20:54:08.487341 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 20:54:08.713957 ignition[925]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:54:08.713957 ignition[925]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 20:54:08.752284 ignition[925]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:54:08.752284 ignition[925]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:54:08.752284 ignition[925]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 20:54:08.752284 ignition[925]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:54:08.752284 ignition[925]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:54:08.752284 ignition[925]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:54:08.752284 ignition[925]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:54:08.752284 ignition[925]: INFO : files: files passed Jan 13 20:54:08.752284 ignition[925]: INFO : Ignition finished successfully Jan 13 20:54:08.718127 systemd[1]: Finished ignition-files.service - Ignition (files). 
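The files stage just replayed a series of create/write operations. The Ignition config driving them is not printed in the log, but a spec-3.x config producing operations like these looks roughly as follows (an illustrative reconstruction using paths and URLs from the log, not the instance's actual user data):

    # Hypothetical Ignition spec-3.x config matching the logged operations.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            "users": [
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example"]}
            ]
        },
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {
                        "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
                    },
                }
            ]
        },
        "systemd": {
            "units": [{"name": "prepare-helm.service", "enabled": True}]
        },
    }
    print(json.dumps(config, indent=2))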
Jan 13 20:54:08.738374 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:54:08.776374 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:54:08.827755 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:54:08.968284 initrd-setup-root-after-ignition[952]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:54:08.968284 initrd-setup-root-after-ignition[952]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:54:08.827932 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:54:09.027280 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:54:08.841302 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:54:08.852618 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:54:08.882322 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:54:08.968388 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:54:08.968682 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:54:08.982679 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:54:09.017412 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:54:09.037486 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:54:09.044617 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:54:09.085310 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:54:09.113333 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:54:09.149312 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:54:09.149431 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:54:09.168321 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:54:09.191401 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:54:09.202670 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:54:09.222575 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:54:09.222675 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:54:09.258681 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:54:09.278521 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:54:09.311357 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:54:09.320496 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:54:09.340626 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:54:09.358504 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:54:09.393275 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:54:09.403517 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:54:09.423452 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 13 20:54:09.440484 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:54:09.474350 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:54:09.474474 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:54:09.500532 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:54:09.527443 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:54:09.537481 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:54:09.537584 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:54:09.557479 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:54:09.557565 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:54:09.596560 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:54:09.596659 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:54:09.605511 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:54:09.605585 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:54:09.672381 ignition[978]: INFO : Ignition 2.20.0 Jan 13 20:54:09.672381 ignition[978]: INFO : Stage: umount Jan 13 20:54:09.672381 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:54:09.672381 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:54:09.672381 ignition[978]: INFO : umount: umount passed Jan 13 20:54:09.672381 ignition[978]: INFO : Ignition finished successfully Jan 13 20:54:09.632327 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:54:09.680239 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:54:09.680367 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:54:09.706252 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:54:09.743384 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:54:09.743506 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:54:09.751563 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:54:09.751638 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:54:09.792091 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:54:09.792891 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:54:09.793014 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:54:09.812717 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:54:09.812844 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:54:09.821875 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:54:09.821949 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:54:09.837517 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:54:09.837594 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:54:09.855553 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:54:09.855622 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:54:09.872519 systemd[1]: Stopped target network.target - Network. Jan 13 20:54:09.889480 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 13 20:54:09.889569 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:54:09.905519 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:54:09.932257 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:54:09.935268 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:54:09.940438 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:54:09.958495 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:54:09.973542 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:54:09.973606 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:54:09.988547 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:54:09.988611 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:54:10.022445 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:54:10.022533 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:54:10.030528 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:54:10.030602 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:54:10.064457 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:54:10.064538 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:54:10.072749 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:54:10.078181 systemd-networkd[746]: eth0: DHCPv6 lease lost Jan 13 20:54:10.100615 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:54:10.120781 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:54:10.120927 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:54:10.140774 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:54:10.141105 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:54:10.150528 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:54:10.150588 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:54:10.176222 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:54:10.203211 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:54:10.203336 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:54:10.221402 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:54:10.221494 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:54:10.239317 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:54:10.239414 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:54:10.257477 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:54:10.257602 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:54:10.278497 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:54:10.298810 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:54:10.663244 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 13 20:54:10.299005 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 13 20:54:10.313586 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:54:10.313654 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:54:10.334437 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:54:10.334496 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:54:10.354492 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:54:10.354569 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:54:10.390461 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:54:10.390567 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:54:10.416500 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:54:10.416586 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:54:10.451324 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:54:10.463410 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:54:10.463494 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:54:10.480520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:54:10.480594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:54:10.511962 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:54:10.512116 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:54:10.531622 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:54:10.531744 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:54:10.553560 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:54:10.569428 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:54:10.613156 systemd[1]: Switching root. Jan 13 20:54:10.890207 systemd-journald[184]: Journal stopped Jan 13 20:54:13.468576 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:54:13.468645 kernel: SELinux: policy capability open_perms=1 Jan 13 20:54:13.468669 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:54:13.468689 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:54:13.468707 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:54:13.468725 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:54:13.468747 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:54:13.468781 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:54:13.468799 kernel: audit: type=1403 audit(1736801651.359:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:54:13.468822 systemd[1]: Successfully loaded SELinux policy in 90.588ms. Jan 13 20:54:13.468844 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.067ms. Jan 13 20:54:13.468865 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:54:13.468884 systemd[1]: Detected virtualization google. Jan 13 20:54:13.468904 systemd[1]: Detected architecture x86-64. 
Jan 13 20:54:13.468929 systemd[1]: Detected first boot. Jan 13 20:54:13.468951 systemd[1]: Initializing machine ID from random generator. Jan 13 20:54:13.468971 zram_generator::config[1018]: No configuration found. Jan 13 20:54:13.468994 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:54:13.469014 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:54:13.469038 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:54:13.469100 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:54:13.469122 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:54:13.469143 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:54:13.469163 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:54:13.469185 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:54:13.469206 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:54:13.469233 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:54:13.469254 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:54:13.469276 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:54:13.469297 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:54:13.469318 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:54:13.469339 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:54:13.469361 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:54:13.469384 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:54:13.469410 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:54:13.469435 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:54:13.469458 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:54:13.469479 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:54:13.469500 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:54:13.469521 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:54:13.469549 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:54:13.469571 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:54:13.469593 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:54:13.469620 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:54:13.469642 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:54:13.469663 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:54:13.469685 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:54:13.469708 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:54:13.469731 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
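"Initializing machine ID from random generator" above means first boot wrote a fresh 128-bit identity to /etc/machine-id, formatted as 32 lowercase hex characters. A sketch of producing an ID in that shape (systemd's own code additionally stamps v4-UUID bits into the value):

    # Generate a machine-id-shaped identifier: 32 lowercase hex chars.
    import uuid

    machine_id = uuid.uuid4().hex
    assert len(machine_id) == 32
    print(machine_id)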
Jan 13 20:54:13.469753 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:54:13.469787 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:54:13.469810 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:54:13.469833 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:54:13.469856 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:54:13.469879 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:54:13.469905 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:54:13.469931 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:54:13.469954 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:54:13.469978 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:54:13.470001 systemd[1]: Reached target machines.target - Containers. Jan 13 20:54:13.470024 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:54:13.470048 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:54:13.470094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:54:13.470123 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:54:13.470146 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:54:13.470166 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:54:13.470186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:54:13.470209 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:54:13.470234 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:54:13.470258 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:54:13.470281 kernel: ACPI: bus type drm_connector registered Jan 13 20:54:13.470307 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:54:13.470331 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:54:13.470354 kernel: loop: module loaded Jan 13 20:54:13.470375 kernel: fuse: init (API version 7.39) Jan 13 20:54:13.470397 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:54:13.470422 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:54:13.470448 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:54:13.470472 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:54:13.470496 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:54:13.470571 systemd-journald[1105]: Collecting audit messages is disabled. Jan 13 20:54:13.470618 systemd-journald[1105]: Journal started Jan 13 20:54:13.470665 systemd-journald[1105]: Runtime Journal (/run/log/journal/b970cf958985416abaaa9249a9751411) is 8.0M, max 148.7M, 140.7M free. 
Jan 13 20:54:13.471971 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:54:12.251695 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:54:12.272030 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 13 20:54:12.272615 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:54:13.506149 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:54:13.523121 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:54:13.523218 systemd[1]: Stopped verity-setup.service. Jan 13 20:54:13.553119 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:54:13.564149 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:54:13.575702 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:54:13.585460 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:54:13.595470 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:54:13.605446 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:54:13.616481 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:54:13.626462 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:54:13.636633 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:54:13.648676 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:54:13.660629 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:54:13.660879 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:54:13.672634 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:54:13.672864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:54:13.685637 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:54:13.685877 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:54:13.696616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:54:13.696854 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:54:13.708818 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:54:13.709120 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:54:13.719686 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:54:13.719934 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:54:13.730629 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:54:13.740626 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:54:13.752629 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:54:13.764630 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:54:13.791602 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:54:13.806244 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:54:13.829247 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 13 20:54:13.839270 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:54:13.839509 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:54:13.852349 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:54:13.875383 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:54:13.888709 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:54:13.898436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:54:13.903390 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:54:13.918923 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:54:13.930295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:54:13.938282 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:54:13.947638 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:54:13.952167 systemd-journald[1105]: Time spent on flushing to /var/log/journal/b970cf958985416abaaa9249a9751411 is 65.071ms for 932 entries. Jan 13 20:54:13.952167 systemd-journald[1105]: System Journal (/var/log/journal/b970cf958985416abaaa9249a9751411) is 8.0M, max 584.8M, 576.8M free. Jan 13 20:54:14.076350 systemd-journald[1105]: Received client request to flush runtime journal. Jan 13 20:54:13.972379 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:54:13.990233 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:54:14.010380 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:54:14.029822 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:54:14.048585 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:54:14.060430 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:54:14.072638 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:54:14.085103 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:54:14.103453 kernel: loop0: detected capacity change from 0 to 138184 Jan 13 20:54:14.104224 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:54:14.128247 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:54:14.143817 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:54:14.143877 udevadm[1138]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:54:14.165218 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:54:14.191605 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
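A quick sanity check of the journal flush figures logged above, 65.071 ms for 932 entries:

    # Average flush cost per journal entry, from the numbers in the log.
    print(round(65.071 / 932 * 1000, 1), "µs per entry")  # ≈ 69.8 µs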
Jan 13 20:54:14.201126 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:54:14.224246 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:54:14.241088 kernel: loop1: detected capacity change from 0 to 211296 Jan 13 20:54:14.245733 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:54:14.249726 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:54:14.308232 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Jan 13 20:54:14.308268 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Jan 13 20:54:14.329005 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:54:14.379119 kernel: loop2: detected capacity change from 0 to 140992 Jan 13 20:54:14.480458 kernel: loop3: detected capacity change from 0 to 52056 Jan 13 20:54:14.549165 kernel: loop4: detected capacity change from 0 to 138184 Jan 13 20:54:14.619115 kernel: loop5: detected capacity change from 0 to 211296 Jan 13 20:54:14.658903 kernel: loop6: detected capacity change from 0 to 140992 Jan 13 20:54:14.717093 kernel: loop7: detected capacity change from 0 to 52056 Jan 13 20:54:14.754644 (sd-merge)[1161]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 13 20:54:14.756800 (sd-merge)[1161]: Merged extensions into '/usr'. Jan 13 20:54:14.763791 systemd[1]: Reloading requested from client PID 1136 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:54:14.765951 systemd[1]: Reloading... Jan 13 20:54:14.938140 zram_generator::config[1187]: No configuration found. Jan 13 20:54:15.060633 ldconfig[1131]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:54:15.205976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:54:15.311408 systemd[1]: Reloading finished in 544 ms. Jan 13 20:54:15.345397 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:54:15.355751 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:54:15.381341 systemd[1]: Starting ensure-sysext.service... Jan 13 20:54:15.395465 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:54:15.407894 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:54:15.434217 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:54:15.436481 systemd-tmpfiles[1228]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:54:15.437134 systemd-tmpfiles[1228]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:54:15.438414 systemd-tmpfiles[1228]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:54:15.438875 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Jan 13 20:54:15.438998 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Jan 13 20:54:15.443823 systemd-tmpfiles[1228]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 13 20:54:15.443843 systemd-tmpfiles[1228]: Skipping /boot Jan 13 20:54:15.445582 systemd[1]: Reloading requested from client PID 1227 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:54:15.445827 systemd[1]: Reloading... Jan 13 20:54:15.474737 systemd-tmpfiles[1228]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:54:15.474764 systemd-tmpfiles[1228]: Skipping /boot Jan 13 20:54:15.558184 systemd-udevd[1231]: Using default interface naming scheme 'v255'. Jan 13 20:54:15.575085 zram_generator::config[1255]: No configuration found. Jan 13 20:54:15.917091 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1310) Jan 13 20:54:15.919823 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:54:15.974109 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 13 20:54:16.067349 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 20:54:16.067399 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 20:54:16.083085 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:54:16.090127 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:54:16.092298 systemd[1]: Reloading finished in 645 ms. Jan 13 20:54:16.114778 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:54:16.141184 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 13 20:54:16.140918 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:54:16.157103 kernel: EDAC MC: Ver: 3.0.0 Jan 13 20:54:16.176135 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:54:16.176255 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 20:54:16.200264 systemd[1]: Finished ensure-sysext.service. Jan 13 20:54:16.210769 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:54:16.245396 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 13 20:54:16.256467 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:54:16.263337 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:54:16.284333 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:54:16.295605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:54:16.300386 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:54:16.316864 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:54:16.340009 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:54:16.355465 lvm[1342]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:54:16.360779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:54:16.374648 augenrules[1356]: No rules Jan 13 20:54:16.378271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
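A few lines back, sd-merge reported overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-gce' system extensions into /usr. The discovery half of that is simple to sketch: systemd-sysext scans its extension directories for raw images or directory trees, then mounts their /usr contents over the host's via overlayfs. The search paths below are from the systemd-sysext documentation as remembered here; the merging itself is systemd's work, not this sketch's:

    # Sketch of sysext image discovery (the merge itself is overlayfs,
    # performed by systemd-sysext, not by code like this).
    import os

    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions():
        found = []
        for root in SEARCH_PATHS:
            if not os.path.isdir(root):
                continue
            for entry in sorted(os.listdir(root)):
                path = os.path.join(root, entry)
                if entry.endswith(".raw") or os.path.isdir(path):
                    found.append(path)
        return found

    # On the machine in this log, this would include an image such as
    # kubernetes-v1.29.2-x86-64.raw, matching "Using extensions ... 'kubernetes'".
    print(discover_extensions())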
Jan 13 20:54:16.396730 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:54:16.405389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:54:16.413645 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:54:16.434279 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:54:16.454987 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:54:16.467302 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:54:16.475805 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:54:16.494356 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:54:16.512000 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:54:16.522242 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:54:16.523784 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:54:16.524160 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:54:16.534912 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:54:16.546706 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:54:16.547421 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:54:16.547654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:54:16.548146 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:54:16.548463 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:54:16.549286 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:54:16.549500 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:54:16.550204 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:54:16.550455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:54:16.555611 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:54:16.556101 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:54:16.570155 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 20:54:16.578297 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:54:16.583334 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:54:16.586093 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 13 20:54:16.586207 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:54:16.586306 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:54:16.592293 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:54:16.596554 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 13 20:54:16.596810 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:54:16.597609 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:54:16.617480 lvm[1384]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:54:16.654866 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:54:16.665314 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:54:16.689675 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:54:16.701780 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 13 20:54:16.716456 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:54:16.815454 systemd-networkd[1366]: lo: Link UP Jan 13 20:54:16.815474 systemd-networkd[1366]: lo: Gained carrier Jan 13 20:54:16.817926 systemd-networkd[1366]: Enumeration completed Jan 13 20:54:16.818639 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:54:16.818657 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:54:16.819239 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:54:16.819931 systemd-networkd[1366]: eth0: Link UP Jan 13 20:54:16.819949 systemd-networkd[1366]: eth0: Gained carrier Jan 13 20:54:16.819976 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:54:16.825157 systemd-resolved[1367]: Positive Trust Anchors: Jan 13 20:54:16.825222 systemd-resolved[1367]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:54:16.825286 systemd-resolved[1367]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:54:16.833158 systemd-networkd[1366]: eth0: DHCPv4 address 10.128.0.13/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 13 20:54:16.835205 systemd-resolved[1367]: Defaulting to hostname 'linux'. Jan 13 20:54:16.838397 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:54:16.850423 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:54:16.860381 systemd[1]: Reached target network.target - Network. Jan 13 20:54:16.870222 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:54:16.881261 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:54:16.891368 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
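eth0 above was matched by the catch-all zz-default.network and picked up 10.128.0.13/32 over DHCP from the GCE metadata server, and the trust-anchor dump is systemd-resolved's standard DNSSEC configuration. Read-only checks that mirror this state (interface name and address are the ones from this boot):

    networkctl status eth0   # matched .network file, carrier, DHCPv4 lease
    resolvectl status        # trust anchors and per-link DNS servers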
Jan 13 20:54:16.902353 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:54:16.913513 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:54:16.923521 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:54:16.935269 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:54:16.946264 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:54:16.946334 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:54:16.955227 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:54:16.965012 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:54:16.977049 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:54:16.995103 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:54:17.006186 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:54:17.016684 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:54:17.026266 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:54:17.035348 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:54:17.035394 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:54:17.047262 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:54:17.062339 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:54:17.083175 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:54:17.108258 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:54:17.132526 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:54:17.143100 jq[1417]: false Jan 13 20:54:17.142209 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:54:17.150332 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:54:17.171338 systemd[1]: Started ntpd.service - Network Time Service. 
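dbus.socket, sshd.socket, and docker.socket above are socket-activated: the listeners are up before their services run. The mapping can be listed directly, as a quick sketch:

    systemctl list-sockets --no-pager   # LISTEN -> UNIT -> ACTIVATES table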
Jan 13 20:54:17.175094 extend-filesystems[1418]: Found loop4 Jan 13 20:54:17.175094 extend-filesystems[1418]: Found loop5 Jan 13 20:54:17.175094 extend-filesystems[1418]: Found loop6 Jan 13 20:54:17.175094 extend-filesystems[1418]: Found loop7 Jan 13 20:54:17.175094 extend-filesystems[1418]: Found sda Jan 13 20:54:17.175094 extend-filesystems[1418]: Found sda1 Jan 13 20:54:17.175094 extend-filesystems[1418]: Found sda2 Jan 13 20:54:17.175094 extend-filesystems[1418]: Found sda3 Jan 13 20:54:17.175094 extend-filesystems[1418]: Found usr Jan 13 20:54:17.175094 extend-filesystems[1418]: Found sda4 Jan 13 20:54:17.175094 extend-filesystems[1418]: Found sda6 Jan 13 20:54:17.175094 extend-filesystems[1418]: Found sda7 Jan 13 20:54:17.175094 extend-filesystems[1418]: Found sda9 Jan 13 20:54:17.175094 extend-filesystems[1418]: Checking size of /dev/sda9 Jan 13 20:54:17.393449 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 13 20:54:17.393526 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 13 20:54:17.393559 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1310) Jan 13 20:54:17.262282 dbus-daemon[1416]: [system] SELinux support is enabled Jan 13 20:54:17.394124 coreos-metadata[1415]: Jan 13 20:54:17.233 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 13 20:54:17.394124 coreos-metadata[1415]: Jan 13 20:54:17.235 INFO Fetch successful Jan 13 20:54:17.394124 coreos-metadata[1415]: Jan 13 20:54:17.235 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 13 20:54:17.394124 coreos-metadata[1415]: Jan 13 20:54:17.235 INFO Fetch successful Jan 13 20:54:17.394124 coreos-metadata[1415]: Jan 13 20:54:17.235 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 13 20:54:17.394124 coreos-metadata[1415]: Jan 13 20:54:17.236 INFO Fetch successful Jan 13 20:54:17.394124 coreos-metadata[1415]: Jan 13 20:54:17.236 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 13 20:54:17.394124 coreos-metadata[1415]: Jan 13 20:54:17.238 INFO Fetch successful Jan 13 20:54:17.187215 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:54:17.394683 extend-filesystems[1418]: Resized partition /dev/sda9 Jan 13 20:54:17.267726 dbus-daemon[1416]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1366 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:54:17.228397 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:54:17.405347 extend-filesystems[1435]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:54:17.405347 extend-filesystems[1435]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 20:54:17.405347 extend-filesystems[1435]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 13 20:54:17.405347 extend-filesystems[1435]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. 
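extend-filesystems grew the root filesystem online: the kernel lines show ext4 going from 1617920 to 2538491 blocks while mounted on /. A hand-run equivalent of the resize step, with the device taken from this log (run only on a filesystem you mean to grow; the partition must already have been enlarged):

    resize2fs /dev/sda9                              # ext4 online grow to fill the partition
    dumpe2fs -h /dev/sda9 | grep -i 'block count'    # should report 2538491 blocks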
Jan 13 20:54:17.363115 ntpd[1423]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:22 UTC 2025 (1): Starting Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:22 UTC 2025 (1): Starting Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: ---------------------------------------------------- Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: corporation. Support and training for ntp-4 are Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: available at https://www.nwtime.org/support Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: ---------------------------------------------------- Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: proto: precision = 0.073 usec (-24) Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: basedate set to 2025-01-01 Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: gps base set to 2025-01-05 (week 2348) Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: Listen normally on 3 eth0 10.128.0.13:123 Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: Listen normally on 4 lo [::1]:123 Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: bind(21) AF_INET6 fe80::4001:aff:fe80:d%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:d%2#123 Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: failed to init interface for address fe80::4001:aff:fe80:d%2 Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: Listening on routing socket on fd #21 for interface updates Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:54:17.474694 ntpd[1423]: 13 Jan 20:54:17 ntpd[1423]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:54:17.264358 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:54:17.481376 extend-filesystems[1418]: Resized filesystem in /dev/sda9 Jan 13 20:54:17.363162 ntpd[1423]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:54:17.301317 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:54:17.363178 ntpd[1423]: ---------------------------------------------------- Jan 13 20:54:17.316814 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). 
Jan 13 20:54:17.363192 ntpd[1423]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:54:17.503329 update_engine[1442]: I20250113 20:54:17.374684 1442 main.cc:92] Flatcar Update Engine starting Jan 13 20:54:17.503329 update_engine[1442]: I20250113 20:54:17.378643 1442 update_check_scheduler.cc:74] Next update check in 9m56s Jan 13 20:54:17.318403 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:54:17.363239 ntpd[1423]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:54:17.327326 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:54:17.363254 ntpd[1423]: corporation. Support and training for ntp-4 are Jan 13 20:54:17.339085 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:54:17.363268 ntpd[1423]: available at https://www.nwtime.org/support Jan 13 20:54:17.518051 jq[1446]: true Jan 13 20:54:17.356049 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:54:17.363283 ntpd[1423]: ---------------------------------------------------- Jan 13 20:54:17.382645 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:54:17.380030 ntpd[1423]: proto: precision = 0.073 usec (-24) Jan 13 20:54:17.383161 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:54:17.384739 ntpd[1423]: basedate set to 2025-01-01 Jan 13 20:54:17.383653 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:54:17.384769 ntpd[1423]: gps base set to 2025-01-05 (week 2348) Jan 13 20:54:17.385351 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:54:17.392966 ntpd[1423]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:54:17.417032 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:54:17.393039 ntpd[1423]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:54:17.421336 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:54:17.396402 ntpd[1423]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:54:17.468651 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:54:17.396464 ntpd[1423]: Listen normally on 3 eth0 10.128.0.13:123 Jan 13 20:54:17.468906 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:54:17.396522 ntpd[1423]: Listen normally on 4 lo [::1]:123 Jan 13 20:54:17.489083 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:54:17.396593 ntpd[1423]: bind(21) AF_INET6 fe80::4001:aff:fe80:d%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:54:17.489122 systemd-logind[1441]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 13 20:54:17.396623 ntpd[1423]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:d%2#123 Jan 13 20:54:17.489152 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:54:17.396644 ntpd[1423]: failed to init interface for address fe80::4001:aff:fe80:d%2 Jan 13 20:54:17.490745 systemd-logind[1441]: New seat seat0. Jan 13 20:54:17.396688 ntpd[1423]: Listening on routing socket on fd #21 for interface updates Jan 13 20:54:17.519638 systemd[1]: Started systemd-logind.service - User Login Management. 
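The update_engine lines above start Flatcar's A/B update checker and schedule the first poll in 9m56s; locksmithd, started further down, coordinates the reboot strategy. Current state can be queried with the bundled client, assuming standard Flatcar tooling:

    update_engine_client -status   # prints CURRENT_OP (e.g. UPDATE_STATUS_IDLE) and NEW_VERSION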
Jan 13 20:54:17.402266 ntpd[1423]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:54:17.402306 ntpd[1423]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:54:17.556655 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:54:17.565127 jq[1453]: true Jan 13 20:54:17.575948 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:54:17.599473 dbus-daemon[1416]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 20:54:17.642449 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:54:17.662146 tar[1452]: linux-amd64/helm Jan 13 20:54:17.662674 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:54:17.674096 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:54:17.675307 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:54:17.675584 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:54:17.697460 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 20:54:17.707262 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:54:17.707566 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:54:17.736499 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:54:17.782783 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:54:17.782411 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:54:17.813457 systemd[1]: Starting sshkeys.service... Jan 13 20:54:17.866320 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:54:17.887589 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
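The ntpd bind(21) failure logged above is transient: eth0's IPv6 link-local address fe80::4001:aff:fe80:d did not exist yet when ntpd enumerated interfaces, so it kept watching the routing socket; once the address appears later in this log, ntpd opens a socket on it. A quick way to verify both sides, assuming the ntp tools are installed:

    ip -6 addr show dev eth0 scope link   # is the fe80:: address present yet?
    ntpq -p                               # ntpd is answering queries locally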
Jan 13 20:54:17.984526 coreos-metadata[1489]: Jan 13 20:54:17.983 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 13 20:54:17.988967 coreos-metadata[1489]: Jan 13 20:54:17.988 INFO Fetch failed with 404: resource not found Jan 13 20:54:17.988967 coreos-metadata[1489]: Jan 13 20:54:17.988 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 13 20:54:17.994622 coreos-metadata[1489]: Jan 13 20:54:17.994 INFO Fetch successful Jan 13 20:54:17.999671 coreos-metadata[1489]: Jan 13 20:54:17.997 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 13 20:54:17.999671 coreos-metadata[1489]: Jan 13 20:54:17.997 INFO Fetch failed with 404: resource not found Jan 13 20:54:17.999671 coreos-metadata[1489]: Jan 13 20:54:17.997 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 13 20:54:18.001342 coreos-metadata[1489]: Jan 13 20:54:18.001 INFO Fetch failed with 404: resource not found Jan 13 20:54:18.001342 coreos-metadata[1489]: Jan 13 20:54:18.001 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 13 20:54:18.006867 coreos-metadata[1489]: Jan 13 20:54:18.006 INFO Fetch successful Jan 13 20:54:18.012404 unknown[1489]: wrote ssh authorized keys file for user: core Jan 13 20:54:18.079765 update-ssh-keys[1497]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:54:18.083214 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:54:18.101136 systemd[1]: Finished sshkeys.service. Jan 13 20:54:18.114785 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:54:18.152242 dbus-daemon[1416]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:54:18.152509 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:54:18.153409 dbus-daemon[1416]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1481 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:54:18.174470 systemd-networkd[1366]: eth0: Gained IPv6LL Jan 13 20:54:18.177176 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:54:18.186396 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:54:18.200133 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:54:18.220394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:54:18.236500 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:54:18.240553 systemd[1]: Starting oem-gce.service - GCE Linux Agent... 
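The sshKeys/ssh-keys probes above walk the GCE metadata fallback chain, and the 404s are expected for attributes that are simply unset. The same requests by hand (the Metadata-Flavor header is mandatory; curl -f turns the 404s into non-zero exits):

    H='Metadata-Flavor: Google'
    B='http://169.254.169.254/computeMetadata/v1'
    curl -fsS -H "$H" "$B/instance/attributes/ssh-keys"   # succeeded in this log
    curl -fsS -H "$H" "$B/project/attributes/ssh-keys"    # succeeded in this log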
Jan 13 20:54:18.275955 init.sh[1508]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 13 20:54:18.288207 init.sh[1508]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 13 20:54:18.288207 init.sh[1508]: + /usr/bin/google_instance_setup Jan 13 20:54:18.376420 polkitd[1504]: Started polkitd version 121 Jan 13 20:54:18.409517 polkitd[1504]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:54:18.409623 polkitd[1504]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:54:18.410348 polkitd[1504]: Finished loading, compiling and executing 2 rules Jan 13 20:54:18.414123 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:54:18.415267 dbus-daemon[1416]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:54:18.416252 polkitd[1504]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:54:18.426238 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 20:54:18.517098 systemd-hostnamed[1481]: Hostname set to (transient) Jan 13 20:54:18.522038 systemd-resolved[1367]: System hostname changed to 'ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal'. Jan 13 20:54:18.592495 containerd[1454]: time="2025-01-13T20:54:18.590216912Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:54:18.614074 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:54:18.688149 containerd[1454]: time="2025-01-13T20:54:18.687126944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:54:18.692000 containerd[1454]: time="2025-01-13T20:54:18.691418919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:54:18.692000 containerd[1454]: time="2025-01-13T20:54:18.691472943Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:54:18.692000 containerd[1454]: time="2025-01-13T20:54:18.691500466Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:54:18.692000 containerd[1454]: time="2025-01-13T20:54:18.691750422Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:54:18.692000 containerd[1454]: time="2025-01-13T20:54:18.691780829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:54:18.692000 containerd[1454]: time="2025-01-13T20:54:18.691869375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:54:18.692000 containerd[1454]: time="2025-01-13T20:54:18.691888623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:54:18.692379 containerd[1454]: time="2025-01-13T20:54:18.692209230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:54:18.692379 containerd[1454]: time="2025-01-13T20:54:18.692236790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:54:18.692379 containerd[1454]: time="2025-01-13T20:54:18.692262625Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:54:18.692379 containerd[1454]: time="2025-01-13T20:54:18.692283914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:54:18.692565 containerd[1454]: time="2025-01-13T20:54:18.692404513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:54:18.693825 containerd[1454]: time="2025-01-13T20:54:18.692724809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:54:18.693825 containerd[1454]: time="2025-01-13T20:54:18.692925842Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:54:18.693825 containerd[1454]: time="2025-01-13T20:54:18.692950876Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:54:18.693992 containerd[1454]: time="2025-01-13T20:54:18.693885061Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:54:18.694074 containerd[1454]: time="2025-01-13T20:54:18.693991164Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.701807295Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.701899257Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.701926523Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.701953814Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.701979593Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.702203756Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.702560382Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.702749231Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.702778843Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.702807220Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.702830937Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.702854218Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.702876175Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:54:18.704442 containerd[1454]: time="2025-01-13T20:54:18.702902626Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.702938861Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.702978817Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.703002640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.703023304Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.703080155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.703106239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.703127257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.703149496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.703169800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.703200503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.703221187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.703244840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.703268136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 13 20:54:18.705145 containerd[1454]: time="2025-01-13T20:54:18.703292596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703312307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703332099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703356254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703381912Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703416077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703438371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703457991Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703521370Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703549499Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703567621Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703588847Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703607099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703636325Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:54:18.705736 containerd[1454]: time="2025-01-13T20:54:18.703654676Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:54:18.710311 containerd[1454]: time="2025-01-13T20:54:18.703672153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:54:18.710367 containerd[1454]: time="2025-01-13T20:54:18.706414289Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:54:18.710367 containerd[1454]: time="2025-01-13T20:54:18.706505245Z" level=info msg="Connect containerd service" Jan 13 20:54:18.710367 containerd[1454]: time="2025-01-13T20:54:18.706564056Z" level=info msg="using legacy CRI server" Jan 13 20:54:18.710367 containerd[1454]: time="2025-01-13T20:54:18.706579271Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:54:18.710367 containerd[1454]: time="2025-01-13T20:54:18.706750735Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:54:18.710367 containerd[1454]: time="2025-01-13T20:54:18.708641864Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:54:18.710367 
containerd[1454]: time="2025-01-13T20:54:18.709365332Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:54:18.710367 containerd[1454]: time="2025-01-13T20:54:18.709442324Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:54:18.713804 containerd[1454]: time="2025-01-13T20:54:18.711707585Z" level=info msg="Start subscribing containerd event" Jan 13 20:54:18.713804 containerd[1454]: time="2025-01-13T20:54:18.711782735Z" level=info msg="Start recovering state" Jan 13 20:54:18.713804 containerd[1454]: time="2025-01-13T20:54:18.711876666Z" level=info msg="Start event monitor" Jan 13 20:54:18.715089 containerd[1454]: time="2025-01-13T20:54:18.711906366Z" level=info msg="Start snapshots syncer" Jan 13 20:54:18.715089 containerd[1454]: time="2025-01-13T20:54:18.714218493Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:54:18.715089 containerd[1454]: time="2025-01-13T20:54:18.714242447Z" level=info msg="Start streaming server" Jan 13 20:54:18.718268 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:54:18.719642 containerd[1454]: time="2025-01-13T20:54:18.718565785Z" level=info msg="containerd successfully booted in 0.132204s" Jan 13 20:54:18.735714 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:54:18.757901 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:54:18.775476 systemd[1]: Started sshd@0-10.128.0.13:22-147.75.109.163:40146.service - OpenSSH per-connection server daemon (147.75.109.163:40146). Jan 13 20:54:18.802753 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:54:18.803037 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:54:18.821763 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:54:18.878271 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:54:18.899534 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:54:18.914561 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:54:18.925667 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:54:19.242110 sshd[1538]: Accepted publickey for core from 147.75.109.163 port 40146 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY Jan 13 20:54:19.245527 sshd-session[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:54:19.265581 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:54:19.284533 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:54:19.310168 systemd-logind[1441]: New session 1 of user core. Jan 13 20:54:19.342157 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:54:19.367730 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:54:19.399029 instance-setup[1512]: INFO Running google_set_multiqueue. Jan 13 20:54:19.406542 tar[1452]: linux-amd64/LICENSE Jan 13 20:54:19.408370 tar[1452]: linux-amd64/README.md Jan 13 20:54:19.409229 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:54:19.450156 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:54:19.452217 instance-setup[1512]: INFO Set channels for eth0 to 2. Jan 13 20:54:19.464670 instance-setup[1512]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. 
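The huge "Start cri plugin with config" blob above is just containerd's effective CRI configuration, including Options:map[SystemdCgroup:true] for the runc runtime. It is easier to read from the CLI than from the journal; a sketch, assuming containerd 1.7's stock tooling:

    containerd config dump | grep -B1 -A3 'runtimes.runc.options'
    # expect: [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #         SystemdCgroup = true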
Jan 13 20:54:19.469115 instance-setup[1512]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 13 20:54:19.471191 instance-setup[1512]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 13 20:54:19.474837 instance-setup[1512]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 13 20:54:19.475157 instance-setup[1512]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 13 20:54:19.478251 instance-setup[1512]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 13 20:54:19.478485 instance-setup[1512]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 13 20:54:19.480889 instance-setup[1512]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 13 20:54:19.492743 instance-setup[1512]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 13 20:54:19.499521 instance-setup[1512]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 13 20:54:19.501970 instance-setup[1512]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 13 20:54:19.502021 instance-setup[1512]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 13 20:54:19.555132 init.sh[1508]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 13 20:54:19.659921 systemd[1552]: Queued start job for default target default.target. Jan 13 20:54:19.666749 systemd[1552]: Created slice app.slice - User Application Slice. Jan 13 20:54:19.666802 systemd[1552]: Reached target paths.target - Paths. Jan 13 20:54:19.666829 systemd[1552]: Reached target timers.target - Timers. Jan 13 20:54:19.673294 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:54:19.707467 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:54:19.707667 systemd[1552]: Reached target sockets.target - Sockets. Jan 13 20:54:19.707707 systemd[1552]: Reached target basic.target - Basic System. Jan 13 20:54:19.707785 systemd[1552]: Reached target default.target - Main User Target. Jan 13 20:54:19.707841 systemd[1552]: Startup finished in 273ms. Jan 13 20:54:19.707971 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:54:19.725321 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:54:19.784485 startup-script[1588]: INFO Starting startup scripts. Jan 13 20:54:19.791014 startup-script[1588]: INFO No startup scripts found in metadata. Jan 13 20:54:19.791129 startup-script[1588]: INFO Finished running startup scripts. Jan 13 20:54:19.815236 init.sh[1508]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 13 20:54:19.815236 init.sh[1508]: + daemon_pids=() Jan 13 20:54:19.815497 init.sh[1508]: + for d in accounts clock_skew network Jan 13 20:54:19.815963 init.sh[1508]: + daemon_pids+=($!) Jan 13 20:54:19.815963 init.sh[1508]: + for d in accounts clock_skew network Jan 13 20:54:19.816101 init.sh[1594]: + /usr/bin/google_accounts_daemon Jan 13 20:54:19.816451 init.sh[1508]: + daemon_pids+=($!) Jan 13 20:54:19.816451 init.sh[1508]: + for d in accounts clock_skew network Jan 13 20:54:19.818086 init.sh[1595]: + /usr/bin/google_clock_skew_daemon Jan 13 20:54:19.818419 init.sh[1508]: + daemon_pids+=($!) 
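google_set_multiqueue above pins the virtio NIC's IRQ pairs to alternating vCPUs and spreads transmit queues with XPS; the two "write error" lines are the script tripping over a sysfs file it cannot set on this machine shape. The effective result, written out as the raw sysfs operations from this log (valid only for this 2-vCPU instance; root required):

    echo 0 > /proc/irq/31/smp_affinity_list              # pair 1 -> CPU0
    echo 0 > /proc/irq/32/smp_affinity_list
    echo 1 > /proc/irq/33/smp_affinity_list              # pair 2 -> CPU1
    echo 1 > /proc/irq/34/smp_affinity_list
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus    # queue 0: CPU mask 0x1
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus    # queue 1: CPU mask 0x2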
Jan 13 20:54:19.818472 init.sh[1508]: + NOTIFY_SOCKET=/run/systemd/notify Jan 13 20:54:19.818472 init.sh[1508]: + /usr/bin/systemd-notify --ready Jan 13 20:54:19.821796 init.sh[1596]: + /usr/bin/google_network_daemon Jan 13 20:54:19.843904 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 13 20:54:19.859582 init.sh[1508]: + wait -n 1594 1595 1596 Jan 13 20:54:19.983498 systemd[1]: Started sshd@1-10.128.0.13:22-147.75.109.163:51754.service - OpenSSH per-connection server daemon (147.75.109.163:51754). Jan 13 20:54:20.356304 google-clock-skew[1595]: INFO Starting Google Clock Skew daemon. Jan 13 20:54:20.363110 google-networking[1596]: INFO Starting Google Networking daemon. Jan 13 20:54:20.366886 ntpd[1423]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:d%2]:123 Jan 13 20:54:20.367403 ntpd[1423]: 13 Jan 20:54:20 ntpd[1423]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:d%2]:123 Jan 13 20:54:20.387155 google-clock-skew[1595]: INFO Clock drift token has changed: 0. Jan 13 20:54:20.388865 sshd[1600]: Accepted publickey for core from 147.75.109.163 port 51754 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY Jan 13 20:54:20.388631 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:54:20.402175 systemd-logind[1441]: New session 2 of user core. Jan 13 20:54:20.422396 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:54:20.440353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:54:20.461949 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:54:20.464378 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:54:20.474486 systemd[1]: Startup finished in 1.076s (kernel) + 9.576s (initrd) + 9.195s (userspace) = 19.847s. Jan 13 20:54:20.524629 groupadd[1617]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 13 20:54:20.529052 groupadd[1617]: group added to /etc/gshadow: name=google-sudoers Jan 13 20:54:20.579947 groupadd[1617]: new group: name=google-sudoers, GID=1000 Jan 13 20:54:20.618334 google-accounts[1594]: INFO Starting Google Accounts daemon. Jan 13 20:54:20.631698 google-accounts[1594]: WARNING OS Login not installed. Jan 13 20:54:20.633512 google-accounts[1594]: INFO Creating a new user account for 0. Jan 13 20:54:20.639519 init.sh[1630]: useradd: invalid user name '0': use --badname to ignore Jan 13 20:54:20.639894 google-accounts[1594]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 13 20:54:20.660033 sshd[1616]: Connection closed by 147.75.109.163 port 51754 Jan 13 20:54:20.662353 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Jan 13 20:54:20.667220 systemd[1]: sshd@1-10.128.0.13:22-147.75.109.163:51754.service: Deactivated successfully. Jan 13 20:54:20.670951 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:54:20.674270 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:54:20.675779 systemd-logind[1441]: Removed session 2. Jan 13 20:54:20.713159 systemd[1]: Started sshd@2-10.128.0.13:22-147.75.109.163:51766.service - OpenSSH per-connection server daemon (147.75.109.163:51766). Jan 13 20:54:21.000958 google-clock-skew[1595]: INFO Synced system time with hardware clock. Jan 13 20:54:21.003693 systemd-resolved[1367]: Clock change detected. Flushing caches. 
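The google-accounts failure above stems from OS Login being absent: the daemon fell back to creating a local account for a metadata entry literally named "0", which shadow-utils rejects because a purely numeric name would be ambiguous with a UID. The exact command and outcome from the log:

    useradd -m -s /bin/bash -p '*' 0   # exits 3: invalid user name '0'
    # useradd suggests --badname to force it; that is almost never the right fix here.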
Jan 13 20:54:21.088361 sshd[1639]: Accepted publickey for core from 147.75.109.163 port 51766 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY Jan 13 20:54:21.090895 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:54:21.097579 systemd-logind[1441]: New session 3 of user core. Jan 13 20:54:21.102652 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:54:21.300003 sshd[1641]: Connection closed by 147.75.109.163 port 51766 Jan 13 20:54:21.300792 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Jan 13 20:54:21.308529 systemd[1]: sshd@2-10.128.0.13:22-147.75.109.163:51766.service: Deactivated successfully. Jan 13 20:54:21.311465 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:54:21.312482 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:54:21.314202 systemd-logind[1441]: Removed session 3. Jan 13 20:54:21.355005 systemd[1]: Started sshd@3-10.128.0.13:22-147.75.109.163:51774.service - OpenSSH per-connection server daemon (147.75.109.163:51774). Jan 13 20:54:21.562522 kubelet[1614]: E0113 20:54:21.562295 1614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:54:21.566360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:54:21.566837 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:54:21.567769 systemd[1]: kubelet.service: Consumed 1.299s CPU time. Jan 13 20:54:21.665140 sshd[1648]: Accepted publickey for core from 147.75.109.163 port 51774 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY Jan 13 20:54:21.666835 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:54:21.673556 systemd-logind[1441]: New session 4 of user core. Jan 13 20:54:21.683664 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:54:21.878809 sshd[1651]: Connection closed by 147.75.109.163 port 51774 Jan 13 20:54:21.880223 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Jan 13 20:54:21.884107 systemd[1]: sshd@3-10.128.0.13:22-147.75.109.163:51774.service: Deactivated successfully. Jan 13 20:54:21.886692 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:54:21.888703 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:54:21.890158 systemd-logind[1441]: Removed session 4. Jan 13 20:54:21.933817 systemd[1]: Started sshd@4-10.128.0.13:22-147.75.109.163:51776.service - OpenSSH per-connection server daemon (147.75.109.163:51776). Jan 13 20:54:22.230439 sshd[1656]: Accepted publickey for core from 147.75.109.163 port 51776 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY Jan 13 20:54:22.232164 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:54:22.238489 systemd-logind[1441]: New session 5 of user core. Jan 13 20:54:22.249663 systemd[1]: Started session-5.scope - Session 5 of User core. 
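The kubelet crash above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, so the unit fails and gets restarted until the node is actually joined to a cluster. Standard checks, not taken from this log:

    systemctl status kubelet --no-pager   # status=1/FAILURE, restart counter climbing
    ls -l /var/lib/kubelet/config.yaml    # absent until kubeadm has run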
Jan 13 20:54:22.424743 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:54:22.425237 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:54:22.445337 sudo[1659]: pam_unix(sudo:session): session closed for user root Jan 13 20:54:22.487607 sshd[1658]: Connection closed by 147.75.109.163 port 51776 Jan 13 20:54:22.489122 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Jan 13 20:54:22.493606 systemd[1]: sshd@4-10.128.0.13:22-147.75.109.163:51776.service: Deactivated successfully. Jan 13 20:54:22.496029 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:54:22.497965 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:54:22.499579 systemd-logind[1441]: Removed session 5. Jan 13 20:54:22.551913 systemd[1]: Started sshd@5-10.128.0.13:22-147.75.109.163:51792.service - OpenSSH per-connection server daemon (147.75.109.163:51792). Jan 13 20:54:22.842558 sshd[1664]: Accepted publickey for core from 147.75.109.163 port 51792 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY Jan 13 20:54:22.844217 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:54:22.849645 systemd-logind[1441]: New session 6 of user core. Jan 13 20:54:22.860774 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:54:23.021318 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:54:23.021845 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:54:23.026969 sudo[1668]: pam_unix(sudo:session): session closed for user root Jan 13 20:54:23.040580 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:54:23.041069 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:54:23.058027 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:54:23.108597 augenrules[1690]: No rules Jan 13 20:54:23.108273 sudo[1667]: pam_unix(sudo:session): session closed for user root Jan 13 20:54:23.106325 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:54:23.106570 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:54:23.151044 sshd[1666]: Connection closed by 147.75.109.163 port 51792 Jan 13 20:54:23.151945 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Jan 13 20:54:23.156260 systemd[1]: sshd@5-10.128.0.13:22-147.75.109.163:51792.service: Deactivated successfully. Jan 13 20:54:23.158609 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:54:23.160656 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:54:23.162084 systemd-logind[1441]: Removed session 6. Jan 13 20:54:23.210831 systemd[1]: Started sshd@6-10.128.0.13:22-147.75.109.163:51800.service - OpenSSH per-connection server daemon (147.75.109.163:51800). Jan 13 20:54:23.510168 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 51800 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY Jan 13 20:54:23.511988 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:54:23.518377 systemd-logind[1441]: New session 7 of user core. Jan 13 20:54:23.524662 systemd[1]: Started session-7.scope - Session 7 of User core. 
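The sudo session above deletes the two shipped audit rule files and restarts audit-rules, after which augenrules reports "No rules" and loads an empty ruleset into the kernel. Verifying that state, assuming the standard audit userspace:

    augenrules --check   # does /etc/audit/rules.d match the loaded rules?
    auditctl -l          # prints 'No rules' after the restart in this log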
Jan 13 20:54:23.689106 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:54:23.689635 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:54:24.134248 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:54:24.143018 (dockerd)[1719]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:54:24.556956 dockerd[1719]: time="2025-01-13T20:54:24.556780425Z" level=info msg="Starting up" Jan 13 20:54:24.768064 dockerd[1719]: time="2025-01-13T20:54:24.767972331Z" level=info msg="Loading containers: start." Jan 13 20:54:24.984438 kernel: Initializing XFRM netlink socket Jan 13 20:54:25.098122 systemd-networkd[1366]: docker0: Link UP Jan 13 20:54:25.135460 dockerd[1719]: time="2025-01-13T20:54:25.135392270Z" level=info msg="Loading containers: done." Jan 13 20:54:25.156526 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1467803988-merged.mount: Deactivated successfully. Jan 13 20:54:25.157181 dockerd[1719]: time="2025-01-13T20:54:25.157118065Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:54:25.157295 dockerd[1719]: time="2025-01-13T20:54:25.157276231Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:54:25.157500 dockerd[1719]: time="2025-01-13T20:54:25.157468809Z" level=info msg="Daemon has completed initialization" Jan 13 20:54:25.200824 dockerd[1719]: time="2025-01-13T20:54:25.200742884Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:54:25.201234 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:54:26.201168 containerd[1454]: time="2025-01-13T20:54:26.200990390Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:54:26.630102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3881049723.mount: Deactivated successfully. 
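dockerd above falls back from the native overlay diff driver because this kernel sets CONFIG_OVERLAY_FS_REDIRECT_DIR; image builds may be slower, but the daemon is otherwise healthy and listening on /run/docker.sock. A sketch of confirming both facts (the kernel config path varies by distro and may not exist on Flatcar):

    zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz 2>/dev/null \
      || grep CONFIG_OVERLAY_FS_REDIRECT_DIR "/boot/config-$(uname -r)"
    docker info --format '{{.Driver}}'   # overlay2, matching the daemon log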
Jan 13 20:54:28.464220 containerd[1454]: time="2025-01-13T20:54:28.464120144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:28.465877 containerd[1454]: time="2025-01-13T20:54:28.465820003Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35145882"
Jan 13 20:54:28.468430 containerd[1454]: time="2025-01-13T20:54:28.467105595Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:28.470840 containerd[1454]: time="2025-01-13T20:54:28.470797970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:28.472228 containerd[1454]: time="2025-01-13T20:54:28.472181933Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.271126331s"
Jan 13 20:54:28.472342 containerd[1454]: time="2025-01-13T20:54:28.472237041Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Jan 13 20:54:28.504675 containerd[1454]: time="2025-01-13T20:54:28.504629143Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Jan 13 20:54:30.379366 containerd[1454]: time="2025-01-13T20:54:30.379290644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:30.380987 containerd[1454]: time="2025-01-13T20:54:30.380936450Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32219666"
Jan 13 20:54:30.382458 containerd[1454]: time="2025-01-13T20:54:30.382360498Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:30.385961 containerd[1454]: time="2025-01-13T20:54:30.385899970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:30.387599 containerd[1454]: time="2025-01-13T20:54:30.387334722Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.882652877s"
Jan 13 20:54:30.387599 containerd[1454]: time="2025-01-13T20:54:30.387399132Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Jan 13 20:54:30.418951 containerd[1454]: time="2025-01-13T20:54:30.418903544Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Jan 13 20:54:31.446671 containerd[1454]: time="2025-01-13T20:54:31.446596992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:31.448316 containerd[1454]: time="2025-01-13T20:54:31.448233493Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17334738"
Jan 13 20:54:31.449528 containerd[1454]: time="2025-01-13T20:54:31.449480417Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:31.453251 containerd[1454]: time="2025-01-13T20:54:31.453178733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:31.454932 containerd[1454]: time="2025-01-13T20:54:31.454706702Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.035579916s"
Jan 13 20:54:31.454932 containerd[1454]: time="2025-01-13T20:54:31.454752182Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Jan 13 20:54:31.487108 containerd[1454]: time="2025-01-13T20:54:31.487057189Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 13 20:54:31.816915 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:54:31.822724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:54:32.083650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:54:32.086518 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:54:32.162728 kubelet[1995]: E0113 20:54:32.162625 1995 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:54:32.169927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:54:32.170314 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:54:32.667009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965204800.mount: Deactivated successfully.
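The kubelet exit above is the expected kubeadm bootstrap loop: the unit starts before /var/lib/kubelet/config.yaml exists, fails with status 1, and systemd keeps rescheduling it until the file is written. A minimal stand-in for that file, as a sketch only — kubeadm generates the real one during init/join, and the two field values are assumptions consistent with the CgroupDriver and static pod path logged further down:

    # Sketch only: kubeadm writes the real /var/lib/kubelet/config.yaml itself
    sudo tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches "CgroupDriver":"systemd" below
    staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" below
    EOF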
Jan 13 20:54:33.218431 containerd[1454]: time="2025-01-13T20:54:33.218337034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:33.219614 containerd[1454]: time="2025-01-13T20:54:33.219550211Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28621853"
Jan 13 20:54:33.222314 containerd[1454]: time="2025-01-13T20:54:33.220701354Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:33.224423 containerd[1454]: time="2025-01-13T20:54:33.223325914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:33.224423 containerd[1454]: time="2025-01-13T20:54:33.224239128Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.737126141s"
Jan 13 20:54:33.224423 containerd[1454]: time="2025-01-13T20:54:33.224284303Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Jan 13 20:54:33.256211 containerd[1454]: time="2025-01-13T20:54:33.256107208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:54:33.636489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394360377.mount: Deactivated successfully.
Jan 13 20:54:34.698053 containerd[1454]: time="2025-01-13T20:54:34.697981682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:34.699727 containerd[1454]: time="2025-01-13T20:54:34.699661296Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419"
Jan 13 20:54:34.701451 containerd[1454]: time="2025-01-13T20:54:34.701199278Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:34.704876 containerd[1454]: time="2025-01-13T20:54:34.704799840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:34.707225 containerd[1454]: time="2025-01-13T20:54:34.706763643Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.450596472s"
Jan 13 20:54:34.707225 containerd[1454]: time="2025-01-13T20:54:34.706811752Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 13 20:54:34.739637 containerd[1454]: time="2025-01-13T20:54:34.739576603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 20:54:35.086538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708584176.mount: Deactivated successfully.
Jan 13 20:54:35.092014 containerd[1454]: time="2025-01-13T20:54:35.091944130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:35.093198 containerd[1454]: time="2025-01-13T20:54:35.093133432Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188"
Jan 13 20:54:35.094265 containerd[1454]: time="2025-01-13T20:54:35.094190501Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:35.097278 containerd[1454]: time="2025-01-13T20:54:35.097215055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:35.098722 containerd[1454]: time="2025-01-13T20:54:35.098317851Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 358.689043ms"
Jan 13 20:54:35.098722 containerd[1454]: time="2025-01-13T20:54:35.098362779Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 13 20:54:35.129353 containerd[1454]: time="2025-01-13T20:54:35.129277124Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 13 20:54:35.532941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471232439.mount: Deactivated successfully.
Jan 13 20:54:37.712080 containerd[1454]: time="2025-01-13T20:54:37.712007305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:37.713793 containerd[1454]: time="2025-01-13T20:54:37.713727966Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56659115"
Jan 13 20:54:37.714797 containerd[1454]: time="2025-01-13T20:54:37.714732603Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:37.718353 containerd[1454]: time="2025-01-13T20:54:37.718312801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:37.720157 containerd[1454]: time="2025-01-13T20:54:37.719983689Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.590656452s"
Jan 13 20:54:37.720157 containerd[1454]: time="2025-01-13T20:54:37.720030589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jan 13 20:54:41.959932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
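With etcd done, all seven images a v1.29 kubeadm control plane needs have now been pulled (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd). Assuming stock kubeadm, the same pre-pull can be reproduced by hand:

    # Assumed manual equivalent of the pull sequence recorded above (kubeadm v1.29)
    kubeadm config images list --kubernetes-version v1.29.12
    kubeadm config images pull --kubernetes-version v1.29.12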
Jan 13 20:54:41.972832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:54:42.013428 systemd[1]: Reloading requested from client PID 2185 ('systemctl') (unit session-7.scope)...
Jan 13 20:54:42.013455 systemd[1]: Reloading...
Jan 13 20:54:42.172453 zram_generator::config[2222]: No configuration found.
Jan 13 20:54:42.315992 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:54:42.417368 systemd[1]: Reloading finished in 403 ms.
Jan 13 20:54:42.473915 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 20:54:42.474061 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 20:54:42.474500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:54:42.478890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:54:42.699283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:54:42.712978 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:54:42.776700 kubelet[2276]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:54:42.776700 kubelet[2276]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:54:42.776700 kubelet[2276]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:54:42.777267 kubelet[2276]: I0113 20:54:42.776761 2276 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:54:43.270964 kubelet[2276]: I0113 20:54:43.270909 2276 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 20:54:43.270964 kubelet[2276]: I0113 20:54:43.270946 2276 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:54:43.271307 kubelet[2276]: I0113 20:54:43.271271 2276 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 20:54:43.294631 kubelet[2276]: E0113 20:54:43.294557 2276 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:43.295033 kubelet[2276]: I0113 20:54:43.294877 2276 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:54:43.312979 kubelet[2276]: I0113 20:54:43.312937 2276 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:54:43.313390 kubelet[2276]: I0113 20:54:43.313355 2276 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:54:43.313716 kubelet[2276]: I0113 20:54:43.313677 2276 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:54:43.313918 kubelet[2276]: I0113 20:54:43.313718 2276 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:54:43.313918 kubelet[2276]: I0113 20:54:43.313737 2276 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:54:43.315985 kubelet[2276]: I0113 20:54:43.315943 2276 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:54:43.316437 kubelet[2276]: I0113 20:54:43.316122 2276 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 20:54:43.316437 kubelet[2276]: I0113 20:54:43.316154 2276 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:54:43.316437 kubelet[2276]: I0113 20:54:43.316203 2276 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:54:43.316437 kubelet[2276]: I0113 20:54:43.316224 2276 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:54:43.318709 kubelet[2276]: W0113 20:54:43.318644 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:43.318888 kubelet[2276]: E0113 20:54:43.318869 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:43.319569 kubelet[2276]: W0113 20:54:43.319086 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:43.319569 kubelet[2276]: E0113 20:54:43.319143 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:43.319949 kubelet[2276]: I0113 20:54:43.319929 2276 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:54:43.324552 kubelet[2276]: I0113 20:54:43.324507 2276 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:54:43.326286 kubelet[2276]: W0113 20:54:43.326246 2276 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:54:43.327101 kubelet[2276]: I0113 20:54:43.327075 2276 server.go:1256] "Started kubelet"
Jan 13 20:54:43.327375 kubelet[2276]: I0113 20:54:43.327326 2276 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:54:43.329063 kubelet[2276]: I0113 20:54:43.328626 2276 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 20:54:43.332554 kubelet[2276]: I0113 20:54:43.331701 2276 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:54:43.334839 kubelet[2276]: I0113 20:54:43.334358 2276 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:54:43.334839 kubelet[2276]: I0113 20:54:43.334655 2276 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:54:43.340728 kubelet[2276]: E0113 20:54:43.340697 2276 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal.181a5bedd4022ad4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal,UID:ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 20:54:43.327036116 +0000 UTC m=+0.607678977,LastTimestamp:2025-01-13 20:54:43.327036116 +0000 UTC m=+0.607678977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal,}"
Jan 13 20:54:43.340902 kubelet[2276]: I0113 20:54:43.340841 2276 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:54:43.340971 kubelet[2276]: I0113 20:54:43.340962 2276 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 20:54:43.341054 kubelet[2276]: I0113 20:54:43.341034 2276 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 20:54:43.342674 kubelet[2276]: W0113 20:54:43.342605 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:43.342780 kubelet[2276]: E0113 20:54:43.342686 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:43.344456 kubelet[2276]: E0113 20:54:43.344382 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.13:6443: connect: connection refused" interval="200ms"
Jan 13 20:54:43.345768 kubelet[2276]: I0113 20:54:43.345145 2276 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:54:43.345768 kubelet[2276]: I0113 20:54:43.345240 2276 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:54:43.347671 kubelet[2276]: I0113 20:54:43.347643 2276 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:54:43.360573 kubelet[2276]: I0113 20:54:43.360535 2276 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:54:43.362605 kubelet[2276]: I0113 20:54:43.362567 2276 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:54:43.362767 kubelet[2276]: I0113 20:54:43.362753 2276 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:54:43.362888 kubelet[2276]: I0113 20:54:43.362876 2276 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 20:54:43.363037 kubelet[2276]: E0113 20:54:43.363023 2276 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:54:43.372657 kubelet[2276]: E0113 20:54:43.372626 2276 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:54:43.373272 kubelet[2276]: W0113 20:54:43.373142 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:43.373272 kubelet[2276]: E0113 20:54:43.373214 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:43.387577 kubelet[2276]: I0113 20:54:43.387543 2276 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:54:43.387721 kubelet[2276]: I0113 20:54:43.387671 2276 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:54:43.387798 kubelet[2276]: I0113 20:54:43.387742 2276 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:54:43.390548 kubelet[2276]: I0113 20:54:43.390508 2276 policy_none.go:49] "None policy: Start"
Jan 13 20:54:43.391273 kubelet[2276]: I0113 20:54:43.391253 2276 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:54:43.391561 kubelet[2276]: I0113 20:54:43.391539 2276 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:54:43.399460 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:54:43.414833 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:54:43.429146 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
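The three Flag deprecation warnings at kubelet startup all point at the same migration: the values belong in the file passed via --config. A sketch of the equivalent KubeletConfiguration fields (v1beta1 field names; the endpoint value is an assumption for this containerd host, while the plugin directory matches the probe.go line above):

    # Sketch: config-file equivalents of the deprecated kubelet flags warned about above
    sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF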
Jan 13 20:54:43.431497 kubelet[2276]: I0113 20:54:43.431466 2276 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:54:43.431878 kubelet[2276]: I0113 20:54:43.431853 2276 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:54:43.434733 kubelet[2276]: E0113 20:54:43.434548 2276 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" not found"
Jan 13 20:54:43.447171 kubelet[2276]: I0113 20:54:43.447134 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.447770 kubelet[2276]: E0113 20:54:43.447715 2276 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.13:6443/api/v1/nodes\": dial tcp 10.128.0.13:6443: connect: connection refused" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.463975 kubelet[2276]: I0113 20:54:43.463907 2276 topology_manager.go:215] "Topology Admit Handler" podUID="32721db632fc84b478f486183e96c62c" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.471369 kubelet[2276]: I0113 20:54:43.471247 2276 topology_manager.go:215] "Topology Admit Handler" podUID="317f16f239bf657d6b2dce62249bc34b" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.476593 kubelet[2276]: I0113 20:54:43.476565 2276 topology_manager.go:215] "Topology Admit Handler" podUID="2d8bb42d720fe61dd0a7fe97bbf20d60" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.484456 systemd[1]: Created slice kubepods-burstable-pod32721db632fc84b478f486183e96c62c.slice - libcontainer container kubepods-burstable-pod32721db632fc84b478f486183e96c62c.slice.
Jan 13 20:54:43.503582 systemd[1]: Created slice kubepods-burstable-pod317f16f239bf657d6b2dce62249bc34b.slice - libcontainer container kubepods-burstable-pod317f16f239bf657d6b2dce62249bc34b.slice.
Jan 13 20:54:43.512906 systemd[1]: Created slice kubepods-burstable-pod2d8bb42d720fe61dd0a7fe97bbf20d60.slice - libcontainer container kubepods-burstable-pod2d8bb42d720fe61dd0a7fe97bbf20d60.slice.
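Because the kubelet runs with the systemd cgroup driver, each static pod admitted above gets its own slice under kubepods-burstable.slice, named after the pod UID from the Topology Admit Handler entries. The resulting hierarchy can be inspected on the node:

    # Inspect the pod slices created above (UID taken from the kube-apiserver Admit Handler entry)
    systemd-cgls /kubepods.slice/kubepods-burstable.slice
    systemctl status kubepods-burstable-pod32721db632fc84b478f486183e96c62c.slice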
Jan 13 20:54:43.545279 kubelet[2276]: E0113 20:54:43.545143 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.13:6443: connect: connection refused" interval="400ms"
Jan 13 20:54:43.642634 kubelet[2276]: I0113 20:54:43.642575 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32721db632fc84b478f486183e96c62c-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"32721db632fc84b478f486183e96c62c\") " pod="kube-system/kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.642634 kubelet[2276]: I0113 20:54:43.642643 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/317f16f239bf657d6b2dce62249bc34b-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"317f16f239bf657d6b2dce62249bc34b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.642893 kubelet[2276]: I0113 20:54:43.642679 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/317f16f239bf657d6b2dce62249bc34b-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"317f16f239bf657d6b2dce62249bc34b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.642893 kubelet[2276]: I0113 20:54:43.642712 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32721db632fc84b478f486183e96c62c-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"32721db632fc84b478f486183e96c62c\") " pod="kube-system/kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.642893 kubelet[2276]: I0113 20:54:43.642750 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32721db632fc84b478f486183e96c62c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"32721db632fc84b478f486183e96c62c\") " pod="kube-system/kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.642893 kubelet[2276]: I0113 20:54:43.642784 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/317f16f239bf657d6b2dce62249bc34b-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"317f16f239bf657d6b2dce62249bc34b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.643098 kubelet[2276]: I0113 20:54:43.642817 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/317f16f239bf657d6b2dce62249bc34b-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"317f16f239bf657d6b2dce62249bc34b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.643098 kubelet[2276]: I0113 20:54:43.642858 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/317f16f239bf657d6b2dce62249bc34b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"317f16f239bf657d6b2dce62249bc34b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.643098 kubelet[2276]: I0113 20:54:43.642906 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d8bb42d720fe61dd0a7fe97bbf20d60-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"2d8bb42d720fe61dd0a7fe97bbf20d60\") " pod="kube-system/kube-scheduler-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.698465 kubelet[2276]: I0113 20:54:43.698432 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.698936 kubelet[2276]: E0113 20:54:43.698913 2276 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.13:6443/api/v1/nodes\": dial tcp 10.128.0.13:6443: connect: connection refused" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:43.799023 containerd[1454]: time="2025-01-13T20:54:43.798856942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal,Uid:32721db632fc84b478f486183e96c62c,Namespace:kube-system,Attempt:0,}"
Jan 13 20:54:43.810823 containerd[1454]: time="2025-01-13T20:54:43.810762189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal,Uid:317f16f239bf657d6b2dce62249bc34b,Namespace:kube-system,Attempt:0,}"
Jan 13 20:54:43.817173 containerd[1454]: time="2025-01-13T20:54:43.817127907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal,Uid:2d8bb42d720fe61dd0a7fe97bbf20d60,Namespace:kube-system,Attempt:0,}"
Jan 13 20:54:43.946549 kubelet[2276]: E0113 20:54:43.946494 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.13:6443: connect: connection refused" interval="800ms"
Jan 13 20:54:44.103697 kubelet[2276]: I0113 20:54:44.103550 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:44.104353 kubelet[2276]: E0113 20:54:44.104326 2276 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.13:6443/api/v1/nodes\": dial tcp 10.128.0.13:6443: connect: connection refused" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:44.171652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110625496.mount: Deactivated successfully.
Jan 13 20:54:44.180501 containerd[1454]: time="2025-01-13T20:54:44.180443013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:54:44.185032 containerd[1454]: time="2025-01-13T20:54:44.184963619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954"
Jan 13 20:54:44.186179 containerd[1454]: time="2025-01-13T20:54:44.186125644Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:54:44.187357 containerd[1454]: time="2025-01-13T20:54:44.187303710Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:54:44.189392 containerd[1454]: time="2025-01-13T20:54:44.189337557Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:54:44.193005 containerd[1454]: time="2025-01-13T20:54:44.192955038Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:54:44.193815 containerd[1454]: time="2025-01-13T20:54:44.193657282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:54:44.195006 containerd[1454]: time="2025-01-13T20:54:44.194896587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:54:44.197234 containerd[1454]: time="2025-01-13T20:54:44.197196148Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 379.953633ms"
Jan 13 20:54:44.199497 containerd[1454]: time="2025-01-13T20:54:44.199451020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 400.453788ms"
Jan 13 20:54:44.202811 containerd[1454]: time="2025-01-13T20:54:44.202761839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 391.877281ms"
Jan 13 20:54:44.233226 kubelet[2276]: W0113 20:54:44.233088 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:44.233226 kubelet[2276]: E0113 20:54:44.233187 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:44.255223 kubelet[2276]: W0113 20:54:44.254924 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:44.255223 kubelet[2276]: E0113 20:54:44.255012 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:44.321441 kubelet[2276]: W0113 20:54:44.321340 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:44.321720 kubelet[2276]: E0113 20:54:44.321701 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:44.351768 kubelet[2276]: W0113 20:54:44.351641 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:44.351768 kubelet[2276]: E0113 20:54:44.351739 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Jan 13 20:54:44.400191 containerd[1454]: time="2025-01-13T20:54:44.397919066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:54:44.400191 containerd[1454]: time="2025-01-13T20:54:44.397988972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:54:44.400191 containerd[1454]: time="2025-01-13T20:54:44.398015128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:54:44.401893 containerd[1454]: time="2025-01-13T20:54:44.401706308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:54:44.407507 containerd[1454]: time="2025-01-13T20:54:44.396084828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:54:44.407507 containerd[1454]: time="2025-01-13T20:54:44.407336613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:54:44.407507 containerd[1454]: time="2025-01-13T20:54:44.407378422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:54:44.408156 containerd[1454]: time="2025-01-13T20:54:44.408052922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:54:44.412271 containerd[1454]: time="2025-01-13T20:54:44.411938322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:54:44.412271 containerd[1454]: time="2025-01-13T20:54:44.412027080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:54:44.412271 containerd[1454]: time="2025-01-13T20:54:44.412056850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:54:44.412271 containerd[1454]: time="2025-01-13T20:54:44.412177054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:54:44.454669 systemd[1]: Started cri-containerd-0e6c0dbc6e986568b2a2838cd48ae17c99eb3298acfa58361f342e53a61edff7.scope - libcontainer container 0e6c0dbc6e986568b2a2838cd48ae17c99eb3298acfa58361f342e53a61edff7.
Jan 13 20:54:44.465926 systemd[1]: Started cri-containerd-80e1cfb36bc57e321cdfd030e64c92e284562616ebc39073648d1b799cc64950.scope - libcontainer container 80e1cfb36bc57e321cdfd030e64c92e284562616ebc39073648d1b799cc64950.
Jan 13 20:54:44.470116 systemd[1]: Started cri-containerd-f0c92301ee1966d6db035fe6bec2c400c6e446d92178150fb359533a4ad594cd.scope - libcontainer container f0c92301ee1966d6db035fe6bec2c400c6e446d92178150fb359533a4ad594cd.
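The three cri-containerd scopes above are the pod sandboxes for the RunPodSandbox requests issued earlier: one per static pod manifest the kubelet admitted from /etc/kubernetes/manifests. On a node in this state they can be listed with standard cri-tools (a sketch; the socket path is containerd's CRI default, and the manifest file names are the usual kubeadm ones):

    # List the sandboxes started above and the manifests behind them
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    ls /etc/kubernetes/manifests   # typically kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml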
Jan 13 20:54:44.550480 containerd[1454]: time="2025-01-13T20:54:44.550225891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal,Uid:317f16f239bf657d6b2dce62249bc34b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e6c0dbc6e986568b2a2838cd48ae17c99eb3298acfa58361f342e53a61edff7\""
Jan 13 20:54:44.553920 kubelet[2276]: E0113 20:54:44.553878 2276 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flat"
Jan 13 20:54:44.565278 containerd[1454]: time="2025-01-13T20:54:44.565217112Z" level=info msg="CreateContainer within sandbox \"0e6c0dbc6e986568b2a2838cd48ae17c99eb3298acfa58361f342e53a61edff7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 13 20:54:44.580864 containerd[1454]: time="2025-01-13T20:54:44.580793576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal,Uid:32721db632fc84b478f486183e96c62c,Namespace:kube-system,Attempt:0,} returns sandbox id \"80e1cfb36bc57e321cdfd030e64c92e284562616ebc39073648d1b799cc64950\""
Jan 13 20:54:44.585431 kubelet[2276]: E0113 20:54:44.584351 2276 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-21291"
Jan 13 20:54:44.587008 containerd[1454]: time="2025-01-13T20:54:44.586966943Z" level=info msg="CreateContainer within sandbox \"80e1cfb36bc57e321cdfd030e64c92e284562616ebc39073648d1b799cc64950\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 13 20:54:44.591523 containerd[1454]: time="2025-01-13T20:54:44.591452858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal,Uid:2d8bb42d720fe61dd0a7fe97bbf20d60,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0c92301ee1966d6db035fe6bec2c400c6e446d92178150fb359533a4ad594cd\""
Jan 13 20:54:44.594718 kubelet[2276]: E0113 20:54:44.594687 2276 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-21291"
Jan 13 20:54:44.597144 containerd[1454]: time="2025-01-13T20:54:44.597114849Z" level=info msg="CreateContainer within sandbox \"f0c92301ee1966d6db035fe6bec2c400c6e446d92178150fb359533a4ad594cd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 13 20:54:44.604023 containerd[1454]: time="2025-01-13T20:54:44.603986904Z" level=info msg="CreateContainer within sandbox \"0e6c0dbc6e986568b2a2838cd48ae17c99eb3298acfa58361f342e53a61edff7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6014e33e88966549ac575dc58015bf5797435c6082267f8818c7a458e6be3045\""
Jan 13 20:54:44.605668 containerd[1454]: time="2025-01-13T20:54:44.605612537Z" level=info msg="StartContainer for \"6014e33e88966549ac575dc58015bf5797435c6082267f8818c7a458e6be3045\""
Jan 13 20:54:44.609185 containerd[1454]: time="2025-01-13T20:54:44.609146172Z" level=info msg="CreateContainer within sandbox \"80e1cfb36bc57e321cdfd030e64c92e284562616ebc39073648d1b799cc64950\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"794526ce86f95bfe53fa670880668440922161554370162a0367253705e0db5e\""
Jan 13 20:54:44.610591 containerd[1454]: time="2025-01-13T20:54:44.610553904Z" level=info msg="StartContainer for \"794526ce86f95bfe53fa670880668440922161554370162a0367253705e0db5e\""
Jan 13 20:54:44.631383 containerd[1454]: time="2025-01-13T20:54:44.631332653Z" level=info msg="CreateContainer within sandbox \"f0c92301ee1966d6db035fe6bec2c400c6e446d92178150fb359533a4ad594cd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9a732b789f265d0f5ad6d3babf850773c5af188e5855f0048e441e338670b4b9\""
Jan 13 20:54:44.633470 containerd[1454]: time="2025-01-13T20:54:44.632998481Z" level=info msg="StartContainer for \"9a732b789f265d0f5ad6d3babf850773c5af188e5855f0048e441e338670b4b9\""
Jan 13 20:54:44.668650 systemd[1]: Started cri-containerd-794526ce86f95bfe53fa670880668440922161554370162a0367253705e0db5e.scope - libcontainer container 794526ce86f95bfe53fa670880668440922161554370162a0367253705e0db5e.
Jan 13 20:54:44.681894 systemd[1]: Started cri-containerd-6014e33e88966549ac575dc58015bf5797435c6082267f8818c7a458e6be3045.scope - libcontainer container 6014e33e88966549ac575dc58015bf5797435c6082267f8818c7a458e6be3045.
Jan 13 20:54:44.738608 systemd[1]: Started cri-containerd-9a732b789f265d0f5ad6d3babf850773c5af188e5855f0048e441e338670b4b9.scope - libcontainer container 9a732b789f265d0f5ad6d3babf850773c5af188e5855f0048e441e338670b4b9.
Jan 13 20:54:44.748345 kubelet[2276]: E0113 20:54:44.748304 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.13:6443: connect: connection refused" interval="1.6s"
Jan 13 20:54:44.789525 containerd[1454]: time="2025-01-13T20:54:44.789202574Z" level=info msg="StartContainer for \"794526ce86f95bfe53fa670880668440922161554370162a0367253705e0db5e\" returns successfully"
Jan 13 20:54:44.828999 containerd[1454]: time="2025-01-13T20:54:44.828938133Z" level=info msg="StartContainer for \"6014e33e88966549ac575dc58015bf5797435c6082267f8818c7a458e6be3045\" returns successfully"
Jan 13 20:54:44.851462 containerd[1454]: time="2025-01-13T20:54:44.851166314Z" level=info msg="StartContainer for \"9a732b789f265d0f5ad6d3babf850773c5af188e5855f0048e441e338670b4b9\" returns successfully"
Jan 13 20:54:44.913649 kubelet[2276]: I0113 20:54:44.913610 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:44.914504 kubelet[2276]: E0113 20:54:44.914474 2276 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.13:6443/api/v1/nodes\": dial tcp 10.128.0.13:6443: connect: connection refused" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:46.521531 kubelet[2276]: I0113 20:54:46.520465 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:47.984353 kubelet[2276]: E0113 20:54:47.984307 2276 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" not found" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:48.096153 kubelet[2276]: I0113 20:54:48.096098 2276 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal"
Jan 13 20:54:48.320120 kubelet[2276]: I0113 20:54:48.319935 2276 apiserver.go:52] "Watching apiserver"
Jan 13 20:54:48.341741 kubelet[2276]: I0113 20:54:48.341674 2276 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 20:54:48.610200 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 13 20:54:50.884525 systemd[1]: Reloading requested from client PID 2550 ('systemctl') (unit session-7.scope)...
Jan 13 20:54:50.884546 systemd[1]: Reloading...
Jan 13 20:54:51.008770 zram_generator::config[2589]: No configuration found.
Jan 13 20:54:51.183644 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:54:51.314926 systemd[1]: Reloading finished in 429 ms.
Jan 13 20:54:51.374835 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:54:51.375928 kubelet[2276]: I0113 20:54:51.375803 2276 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:54:51.390329 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:54:51.390646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:54:51.390722 systemd[1]: kubelet.service: Consumed 1.145s CPU time, 113.1M memory peak, 0B memory swap peak.
Jan 13 20:54:51.398133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:54:51.647360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:54:51.662092 (kubelet)[2639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:54:51.750646 kubelet[2639]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:54:51.750646 kubelet[2639]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:54:51.751522 kubelet[2639]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:54:51.751522 kubelet[2639]: I0113 20:54:51.750774 2639 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:54:51.760454 kubelet[2639]: I0113 20:54:51.760379 2639 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:54:51.761120 kubelet[2639]: I0113 20:54:51.761027 2639 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:54:51.761498 kubelet[2639]: I0113 20:54:51.761475 2639 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:54:51.765924 kubelet[2639]: I0113 20:54:51.765877 2639 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:54:51.770021 kubelet[2639]: I0113 20:54:51.769988 2639 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:54:51.777908 sudo[2653]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:54:51.779016 sudo[2653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:54:51.794438 kubelet[2639]: I0113 20:54:51.793894 2639 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:54:51.794438 kubelet[2639]: I0113 20:54:51.794321 2639 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:54:51.794919 kubelet[2639]: I0113 20:54:51.794892 2639 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:54:51.795139 kubelet[2639]: I0113 20:54:51.795122 2639 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:54:51.795249 kubelet[2639]: I0113 20:54:51.795236 2639 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:54:51.795377 kubelet[2639]: I0113 20:54:51.795365 2639 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:54:51.795637 kubelet[2639]: I0113 20:54:51.795619 2639 kubelet.go:396] "Attempting to sync node with API server" Jan 13 
20:54:51.795756 kubelet[2639]: I0113 20:54:51.795742 2639 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:54:51.795879 kubelet[2639]: I0113 20:54:51.795865 2639 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:54:51.795980 kubelet[2639]: I0113 20:54:51.795966 2639 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:54:51.798628 kubelet[2639]: I0113 20:54:51.798598 2639 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:54:51.798909 kubelet[2639]: I0113 20:54:51.798887 2639 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:54:51.801573 kubelet[2639]: I0113 20:54:51.801544 2639 server.go:1256] "Started kubelet" Jan 13 20:54:51.806953 kubelet[2639]: I0113 20:54:51.806924 2639 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:54:51.821255 kubelet[2639]: I0113 20:54:51.821211 2639 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:54:51.822642 kubelet[2639]: I0113 20:54:51.822521 2639 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:54:51.827357 kubelet[2639]: E0113 20:54:51.827242 2639 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:54:51.827735 kubelet[2639]: I0113 20:54:51.827609 2639 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:54:51.827980 kubelet[2639]: I0113 20:54:51.827965 2639 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:54:51.830208 kubelet[2639]: I0113 20:54:51.830177 2639 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:54:51.830809 kubelet[2639]: I0113 20:54:51.830699 2639 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:54:51.830922 kubelet[2639]: I0113 20:54:51.830910 2639 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:54:51.839441 kubelet[2639]: I0113 20:54:51.838866 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:54:51.843307 kubelet[2639]: I0113 20:54:51.843232 2639 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:54:51.844095 kubelet[2639]: I0113 20:54:51.843581 2639 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:54:51.844371 kubelet[2639]: I0113 20:54:51.844236 2639 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:54:51.844371 kubelet[2639]: E0113 20:54:51.844321 2639 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:54:51.867902 kubelet[2639]: I0113 20:54:51.867445 2639 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:54:51.873453 kubelet[2639]: I0113 20:54:51.873381 2639 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:54:51.873852 kubelet[2639]: I0113 20:54:51.873612 2639 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:54:51.937170 kubelet[2639]: I0113 20:54:51.936661 2639 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:51.944923 kubelet[2639]: E0113 20:54:51.944879 2639 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:54:51.962154 kubelet[2639]: I0113 20:54:51.962108 2639 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:51.964655 kubelet[2639]: I0113 20:54:51.964625 2639 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.012511 kubelet[2639]: I0113 20:54:52.012455 2639 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:54:52.012511 kubelet[2639]: I0113 20:54:52.012500 2639 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:54:52.012511 kubelet[2639]: I0113 20:54:52.012525 2639 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:54:52.012814 kubelet[2639]: I0113 20:54:52.012736 2639 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:54:52.012814 kubelet[2639]: I0113 20:54:52.012768 2639 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:54:52.012814 kubelet[2639]: I0113 20:54:52.012779 2639 policy_none.go:49] "None policy: Start" Jan 13 20:54:52.016637 kubelet[2639]: I0113 20:54:52.015832 2639 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:54:52.016637 kubelet[2639]: I0113 20:54:52.015867 2639 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:54:52.016637 kubelet[2639]: I0113 20:54:52.016077 2639 state_mem.go:75] "Updated machine memory state" Jan 13 20:54:52.026287 kubelet[2639]: I0113 20:54:52.026261 2639 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:54:52.031371 kubelet[2639]: I0113 20:54:52.031339 2639 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:54:52.145847 kubelet[2639]: I0113 20:54:52.145801 2639 topology_manager.go:215] "Topology Admit Handler" podUID="32721db632fc84b478f486183e96c62c" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.146031 kubelet[2639]: I0113 20:54:52.145926 2639 topology_manager.go:215] "Topology Admit Handler" 
podUID="317f16f239bf657d6b2dce62249bc34b" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.146031 kubelet[2639]: I0113 20:54:52.145979 2639 topology_manager.go:215] "Topology Admit Handler" podUID="2d8bb42d720fe61dd0a7fe97bbf20d60" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.167434 kubelet[2639]: W0113 20:54:52.164637 2639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 20:54:52.167434 kubelet[2639]: W0113 20:54:52.165079 2639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 20:54:52.167434 kubelet[2639]: W0113 20:54:52.165130 2639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 20:54:52.234992 kubelet[2639]: I0113 20:54:52.234863 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d8bb42d720fe61dd0a7fe97bbf20d60-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"2d8bb42d720fe61dd0a7fe97bbf20d60\") " pod="kube-system/kube-scheduler-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.234992 kubelet[2639]: I0113 20:54:52.234927 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32721db632fc84b478f486183e96c62c-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"32721db632fc84b478f486183e96c62c\") " pod="kube-system/kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.234992 kubelet[2639]: I0113 20:54:52.234964 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/317f16f239bf657d6b2dce62249bc34b-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"317f16f239bf657d6b2dce62249bc34b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.235259 kubelet[2639]: I0113 20:54:52.234998 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/317f16f239bf657d6b2dce62249bc34b-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"317f16f239bf657d6b2dce62249bc34b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.235259 kubelet[2639]: I0113 20:54:52.235051 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/317f16f239bf657d6b2dce62249bc34b-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: 
\"317f16f239bf657d6b2dce62249bc34b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.235259 kubelet[2639]: I0113 20:54:52.235093 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/317f16f239bf657d6b2dce62249bc34b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"317f16f239bf657d6b2dce62249bc34b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.235259 kubelet[2639]: I0113 20:54:52.235134 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32721db632fc84b478f486183e96c62c-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"32721db632fc84b478f486183e96c62c\") " pod="kube-system/kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.235497 kubelet[2639]: I0113 20:54:52.235172 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32721db632fc84b478f486183e96c62c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"32721db632fc84b478f486183e96c62c\") " pod="kube-system/kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.235497 kubelet[2639]: I0113 20:54:52.235211 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/317f16f239bf657d6b2dce62249bc34b-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" (UID: \"317f16f239bf657d6b2dce62249bc34b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" Jan 13 20:54:52.646559 sudo[2653]: pam_unix(sudo:session): session closed for user root Jan 13 20:54:52.804613 kubelet[2639]: I0113 20:54:52.804570 2639 apiserver.go:52] "Watching apiserver" Jan 13 20:54:52.831458 kubelet[2639]: I0113 20:54:52.831163 2639 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:54:52.972863 kubelet[2639]: I0113 20:54:52.972707 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" podStartSLOduration=0.9726225 podStartE2EDuration="972.6225ms" podCreationTimestamp="2025-01-13 20:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:54:52.971558913 +0000 UTC m=+1.302068485" watchObservedRunningTime="2025-01-13 20:54:52.9726225 +0000 UTC m=+1.303132060" Jan 13 20:54:52.973595 kubelet[2639]: I0113 20:54:52.973273 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" podStartSLOduration=0.973213305 podStartE2EDuration="973.213305ms" podCreationTimestamp="2025-01-13 20:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-01-13 20:54:52.958976765 +0000 UTC m=+1.289486337" watchObservedRunningTime="2025-01-13 20:54:52.973213305 +0000 UTC m=+1.303722857" Jan 13 20:54:52.999799 kubelet[2639]: I0113 20:54:52.999402 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" podStartSLOduration=0.999345889 podStartE2EDuration="999.345889ms" podCreationTimestamp="2025-01-13 20:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:54:52.98709916 +0000 UTC m=+1.317608729" watchObservedRunningTime="2025-01-13 20:54:52.999345889 +0000 UTC m=+1.329855462" Jan 13 20:54:54.660752 sudo[1701]: pam_unix(sudo:session): session closed for user root Jan 13 20:54:54.703144 sshd[1700]: Connection closed by 147.75.109.163 port 51800 Jan 13 20:54:54.704032 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Jan 13 20:54:54.708466 systemd[1]: sshd@6-10.128.0.13:22-147.75.109.163:51800.service: Deactivated successfully. Jan 13 20:54:54.712144 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:54:54.712393 systemd[1]: session-7.scope: Consumed 7.504s CPU time, 189.2M memory peak, 0B memory swap peak. Jan 13 20:54:54.715357 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:54:54.717482 systemd-logind[1441]: Removed session 7. Jan 13 20:55:02.688376 update_engine[1442]: I20250113 20:55:02.688282 1442 update_attempter.cc:509] Updating boot flags... Jan 13 20:55:02.756551 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2716) Jan 13 20:55:02.868687 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2720) Jan 13 20:55:02.986579 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2720) Jan 13 20:55:04.159446 kubelet[2639]: I0113 20:55:04.159381 2639 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:55:04.160084 containerd[1454]: time="2025-01-13T20:55:04.159878418Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:55:04.160730 kubelet[2639]: I0113 20:55:04.160213 2639 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:55:04.660303 kubelet[2639]: I0113 20:55:04.660229 2639 topology_manager.go:215] "Topology Admit Handler" podUID="8c1bab34-5e34-4a23-82fa-6cc20c2b86fc" podNamespace="kube-system" podName="cilium-operator-5cc964979-kjh4x" Jan 13 20:55:04.677564 systemd[1]: Created slice kubepods-besteffort-pod8c1bab34_5e34_4a23_82fa_6cc20c2b86fc.slice - libcontainer container kubepods-besteffort-pod8c1bab34_5e34_4a23_82fa_6cc20c2b86fc.slice. 
Jan 13 20:55:04.711134 kubelet[2639]: I0113 20:55:04.711069 2639 topology_manager.go:215] "Topology Admit Handler" podUID="461d812a-4a3b-442c-90af-d139436b8162" podNamespace="kube-system" podName="kube-proxy-cp8fx" Jan 13 20:55:04.717493 kubelet[2639]: W0113 20:55:04.717441 2639 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal' and this object Jan 13 20:55:04.717645 kubelet[2639]: E0113 20:55:04.717504 2639 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal' and this object Jan 13 20:55:04.726473 systemd[1]: Created slice kubepods-besteffort-pod461d812a_4a3b_442c_90af_d139436b8162.slice - libcontainer container kubepods-besteffort-pod461d812a_4a3b_442c_90af_d139436b8162.slice. Jan 13 20:55:04.747307 kubelet[2639]: I0113 20:55:04.743683 2639 topology_manager.go:215] "Topology Admit Handler" podUID="0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" podNamespace="kube-system" podName="cilium-t24nc" Jan 13 20:55:04.767535 systemd[1]: Created slice kubepods-burstable-pod0aaf1327_70f4_4727_a2d4_ff0db35bb2ae.slice - libcontainer container kubepods-burstable-pod0aaf1327_70f4_4727_a2d4_ff0db35bb2ae.slice. 
Jan 13 20:55:04.810112 kubelet[2639]: I0113 20:55:04.810073 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/461d812a-4a3b-442c-90af-d139436b8162-xtables-lock\") pod \"kube-proxy-cp8fx\" (UID: \"461d812a-4a3b-442c-90af-d139436b8162\") " pod="kube-system/kube-proxy-cp8fx" Jan 13 20:55:04.810386 kubelet[2639]: I0113 20:55:04.810368 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/461d812a-4a3b-442c-90af-d139436b8162-lib-modules\") pod \"kube-proxy-cp8fx\" (UID: \"461d812a-4a3b-442c-90af-d139436b8162\") " pod="kube-system/kube-proxy-cp8fx" Jan 13 20:55:04.810607 kubelet[2639]: I0113 20:55:04.810589 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7tbm\" (UniqueName: \"kubernetes.io/projected/8c1bab34-5e34-4a23-82fa-6cc20c2b86fc-kube-api-access-z7tbm\") pod \"cilium-operator-5cc964979-kjh4x\" (UID: \"8c1bab34-5e34-4a23-82fa-6cc20c2b86fc\") " pod="kube-system/cilium-operator-5cc964979-kjh4x" Jan 13 20:55:04.810773 kubelet[2639]: I0113 20:55:04.810757 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/461d812a-4a3b-442c-90af-d139436b8162-kube-proxy\") pod \"kube-proxy-cp8fx\" (UID: \"461d812a-4a3b-442c-90af-d139436b8162\") " pod="kube-system/kube-proxy-cp8fx" Jan 13 20:55:04.810905 kubelet[2639]: I0113 20:55:04.810889 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkcq8\" (UniqueName: \"kubernetes.io/projected/461d812a-4a3b-442c-90af-d139436b8162-kube-api-access-hkcq8\") pod \"kube-proxy-cp8fx\" (UID: \"461d812a-4a3b-442c-90af-d139436b8162\") " pod="kube-system/kube-proxy-cp8fx" Jan 13 20:55:04.811027 kubelet[2639]: I0113 20:55:04.811013 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c1bab34-5e34-4a23-82fa-6cc20c2b86fc-cilium-config-path\") pod \"cilium-operator-5cc964979-kjh4x\" (UID: \"8c1bab34-5e34-4a23-82fa-6cc20c2b86fc\") " pod="kube-system/cilium-operator-5cc964979-kjh4x" Jan 13 20:55:04.911437 kubelet[2639]: I0113 20:55:04.911273 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-xtables-lock\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.911437 kubelet[2639]: I0113 20:55:04.911325 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cilium-run\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.911437 kubelet[2639]: I0113 20:55:04.911356 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-etc-cni-netd\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.911437 kubelet[2639]: I0113 20:55:04.911395 2639 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-hostproc\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.911437 kubelet[2639]: I0113 20:55:04.911443 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-lib-modules\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.911806 kubelet[2639]: I0113 20:55:04.911475 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-clustermesh-secrets\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.911806 kubelet[2639]: I0113 20:55:04.911507 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tx2w\" (UniqueName: \"kubernetes.io/projected/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-kube-api-access-8tx2w\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.911806 kubelet[2639]: I0113 20:55:04.911578 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-bpf-maps\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.911806 kubelet[2639]: I0113 20:55:04.911634 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cni-path\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.911806 kubelet[2639]: I0113 20:55:04.911669 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cilium-config-path\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.911806 kubelet[2639]: I0113 20:55:04.911775 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-host-proc-sys-net\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.912115 kubelet[2639]: I0113 20:55:04.911818 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-host-proc-sys-kernel\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.912115 kubelet[2639]: I0113 20:55:04.911855 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-hubble-tls\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.912115 kubelet[2639]: I0113 20:55:04.911890 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cilium-cgroup\") pod \"cilium-t24nc\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") " pod="kube-system/cilium-t24nc" Jan 13 20:55:04.989797 containerd[1454]: time="2025-01-13T20:55:04.989749369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-kjh4x,Uid:8c1bab34-5e34-4a23-82fa-6cc20c2b86fc,Namespace:kube-system,Attempt:0,}" Jan 13 20:55:05.053835 containerd[1454]: time="2025-01-13T20:55:05.052639314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:55:05.053835 containerd[1454]: time="2025-01-13T20:55:05.052719820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:55:05.053835 containerd[1454]: time="2025-01-13T20:55:05.052748088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:55:05.053835 containerd[1454]: time="2025-01-13T20:55:05.053671053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:55:05.074571 containerd[1454]: time="2025-01-13T20:55:05.074217063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t24nc,Uid:0aaf1327-70f4-4727-a2d4-ff0db35bb2ae,Namespace:kube-system,Attempt:0,}" Jan 13 20:55:05.091765 systemd[1]: Started cri-containerd-5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316.scope - libcontainer container 5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316. Jan 13 20:55:05.119851 containerd[1454]: time="2025-01-13T20:55:05.119400218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:55:05.119851 containerd[1454]: time="2025-01-13T20:55:05.119508184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:55:05.119851 containerd[1454]: time="2025-01-13T20:55:05.119535888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:55:05.119851 containerd[1454]: time="2025-01-13T20:55:05.119669835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:55:05.152895 systemd[1]: Started cri-containerd-439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503.scope - libcontainer container 439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503. 
Jan 13 20:55:05.178809 containerd[1454]: time="2025-01-13T20:55:05.178673938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-kjh4x,Uid:8c1bab34-5e34-4a23-82fa-6cc20c2b86fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\"" Jan 13 20:55:05.186984 containerd[1454]: time="2025-01-13T20:55:05.186931381Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:55:05.200858 containerd[1454]: time="2025-01-13T20:55:05.200471103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t24nc,Uid:0aaf1327-70f4-4727-a2d4-ff0db35bb2ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\"" Jan 13 20:55:05.935449 containerd[1454]: time="2025-01-13T20:55:05.933234624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cp8fx,Uid:461d812a-4a3b-442c-90af-d139436b8162,Namespace:kube-system,Attempt:0,}" Jan 13 20:55:05.973707 containerd[1454]: time="2025-01-13T20:55:05.973537159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:55:05.973707 containerd[1454]: time="2025-01-13T20:55:05.973646378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:55:05.973707 containerd[1454]: time="2025-01-13T20:55:05.973671825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:55:05.974620 containerd[1454]: time="2025-01-13T20:55:05.974383186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:55:06.004633 systemd[1]: Started cri-containerd-734fc46bf2d6cb0efab0a421c2383ca7f5ac5d5f26f09dca87e8822160e6369a.scope - libcontainer container 734fc46bf2d6cb0efab0a421c2383ca7f5ac5d5f26f09dca87e8822160e6369a. Jan 13 20:55:06.039249 containerd[1454]: time="2025-01-13T20:55:06.039193551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cp8fx,Uid:461d812a-4a3b-442c-90af-d139436b8162,Namespace:kube-system,Attempt:0,} returns sandbox id \"734fc46bf2d6cb0efab0a421c2383ca7f5ac5d5f26f09dca87e8822160e6369a\"" Jan 13 20:55:06.054920 containerd[1454]: time="2025-01-13T20:55:06.054861584Z" level=info msg="CreateContainer within sandbox \"734fc46bf2d6cb0efab0a421c2383ca7f5ac5d5f26f09dca87e8822160e6369a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:55:06.077661 containerd[1454]: time="2025-01-13T20:55:06.077614366Z" level=info msg="CreateContainer within sandbox \"734fc46bf2d6cb0efab0a421c2383ca7f5ac5d5f26f09dca87e8822160e6369a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"85ff1c6d31daf67b115b3ecc273124c4088e43fb092672314efcf922406a9b99\"" Jan 13 20:55:06.081437 containerd[1454]: time="2025-01-13T20:55:06.081289429Z" level=info msg="StartContainer for \"85ff1c6d31daf67b115b3ecc273124c4088e43fb092672314efcf922406a9b99\"" Jan 13 20:55:06.130648 systemd[1]: Started cri-containerd-85ff1c6d31daf67b115b3ecc273124c4088e43fb092672314efcf922406a9b99.scope - libcontainer container 85ff1c6d31daf67b115b3ecc273124c4088e43fb092672314efcf922406a9b99. 
Jan 13 20:55:06.189703 containerd[1454]: time="2025-01-13T20:55:06.189173655Z" level=info msg="StartContainer for \"85ff1c6d31daf67b115b3ecc273124c4088e43fb092672314efcf922406a9b99\" returns successfully" Jan 13 20:55:06.989762 kubelet[2639]: I0113 20:55:06.989601 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cp8fx" podStartSLOduration=2.98952228 podStartE2EDuration="2.98952228s" podCreationTimestamp="2025-01-13 20:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:55:06.989357315 +0000 UTC m=+15.319866891" watchObservedRunningTime="2025-01-13 20:55:06.98952228 +0000 UTC m=+15.320031855" Jan 13 20:55:07.159581 containerd[1454]: time="2025-01-13T20:55:07.159514191Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:55:07.160773 containerd[1454]: time="2025-01-13T20:55:07.160716217Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907173" Jan 13 20:55:07.162392 containerd[1454]: time="2025-01-13T20:55:07.161987256Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:55:07.164504 containerd[1454]: time="2025-01-13T20:55:07.164310281Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.977059924s" Jan 13 20:55:07.164504 containerd[1454]: time="2025-01-13T20:55:07.164355768Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:55:07.167037 containerd[1454]: time="2025-01-13T20:55:07.166737760Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:55:07.170013 containerd[1454]: time="2025-01-13T20:55:07.169800879Z" level=info msg="CreateContainer within sandbox \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:55:07.189623 containerd[1454]: time="2025-01-13T20:55:07.189581868Z" level=info msg="CreateContainer within sandbox \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\"" Jan 13 20:55:07.191757 containerd[1454]: time="2025-01-13T20:55:07.190578792Z" level=info msg="StartContainer for \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\"" Jan 13 20:55:07.240638 systemd[1]: Started cri-containerd-e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8.scope - libcontainer container 
e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8. Jan 13 20:55:07.275898 containerd[1454]: time="2025-01-13T20:55:07.275843736Z" level=info msg="StartContainer for \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\" returns successfully" Jan 13 20:55:11.865868 kubelet[2639]: I0113 20:55:11.865026 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-kjh4x" podStartSLOduration=5.883627583 podStartE2EDuration="7.864963194s" podCreationTimestamp="2025-01-13 20:55:04 +0000 UTC" firstStartedPulling="2025-01-13 20:55:05.184160883 +0000 UTC m=+13.514670441" lastFinishedPulling="2025-01-13 20:55:07.165496487 +0000 UTC m=+15.496006052" observedRunningTime="2025-01-13 20:55:08.033838045 +0000 UTC m=+16.364347617" watchObservedRunningTime="2025-01-13 20:55:11.864963194 +0000 UTC m=+20.195472766" Jan 13 20:55:23.380287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1313321730.mount: Deactivated successfully. Jan 13 20:55:26.100508 containerd[1454]: time="2025-01-13T20:55:26.100433792Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:55:26.102079 containerd[1454]: time="2025-01-13T20:55:26.101917796Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166733527" Jan 13 20:55:26.103358 containerd[1454]: time="2025-01-13T20:55:26.103277674Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:55:26.106327 containerd[1454]: time="2025-01-13T20:55:26.105446974Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 18.938662399s" Jan 13 20:55:26.106327 containerd[1454]: time="2025-01-13T20:55:26.105493589Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:55:26.109030 containerd[1454]: time="2025-01-13T20:55:26.108992799Z" level=info msg="CreateContainer within sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:55:26.128607 containerd[1454]: time="2025-01-13T20:55:26.128554874Z" level=info msg="CreateContainer within sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3\"" Jan 13 20:55:26.131972 containerd[1454]: time="2025-01-13T20:55:26.129730937Z" level=info msg="StartContainer for \"97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3\"" Jan 13 20:55:26.129818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2508841354.mount: Deactivated successfully. 
Jan 13 20:55:26.187653 systemd[1]: Started cri-containerd-97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3.scope - libcontainer container 97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3. Jan 13 20:55:26.225228 containerd[1454]: time="2025-01-13T20:55:26.225141128Z" level=info msg="StartContainer for \"97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3\" returns successfully" Jan 13 20:55:26.242842 systemd[1]: cri-containerd-97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3.scope: Deactivated successfully. Jan 13 20:55:27.121110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3-rootfs.mount: Deactivated successfully. Jan 13 20:55:28.312315 containerd[1454]: time="2025-01-13T20:55:28.312099243Z" level=info msg="shim disconnected" id=97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3 namespace=k8s.io Jan 13 20:55:28.312315 containerd[1454]: time="2025-01-13T20:55:28.312175187Z" level=warning msg="cleaning up after shim disconnected" id=97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3 namespace=k8s.io Jan 13 20:55:28.312315 containerd[1454]: time="2025-01-13T20:55:28.312190977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:55:29.038841 containerd[1454]: time="2025-01-13T20:55:29.038788241Z" level=info msg="CreateContainer within sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:55:29.064009 containerd[1454]: time="2025-01-13T20:55:29.063795372Z" level=info msg="CreateContainer within sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee\"" Jan 13 20:55:29.064675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2396673442.mount: Deactivated successfully. Jan 13 20:55:29.066043 containerd[1454]: time="2025-01-13T20:55:29.066003335Z" level=info msg="StartContainer for \"27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee\"" Jan 13 20:55:29.126683 systemd[1]: Started cri-containerd-27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee.scope - libcontainer container 27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee. Jan 13 20:55:29.165768 containerd[1454]: time="2025-01-13T20:55:29.165687362Z" level=info msg="StartContainer for \"27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee\" returns successfully" Jan 13 20:55:29.182544 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:55:29.182946 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:55:29.183057 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:55:29.193000 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:55:29.193381 systemd[1]: cri-containerd-27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee.scope: Deactivated successfully. Jan 13 20:55:29.232831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee-rootfs.mount: Deactivated successfully. 
Jan 13 20:55:29.235525 containerd[1454]: time="2025-01-13T20:55:29.235315334Z" level=info msg="shim disconnected" id=27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee namespace=k8s.io Jan 13 20:55:29.235525 containerd[1454]: time="2025-01-13T20:55:29.235456056Z" level=warning msg="cleaning up after shim disconnected" id=27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee namespace=k8s.io Jan 13 20:55:29.235525 containerd[1454]: time="2025-01-13T20:55:29.235475318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:55:29.235977 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:55:30.043794 containerd[1454]: time="2025-01-13T20:55:30.043198517Z" level=info msg="CreateContainer within sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:55:30.077964 containerd[1454]: time="2025-01-13T20:55:30.077904089Z" level=info msg="CreateContainer within sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7\"" Jan 13 20:55:30.080317 containerd[1454]: time="2025-01-13T20:55:30.079134046Z" level=info msg="StartContainer for \"e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7\"" Jan 13 20:55:30.081329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount280031214.mount: Deactivated successfully. Jan 13 20:55:30.131645 systemd[1]: Started cri-containerd-e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7.scope - libcontainer container e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7. Jan 13 20:55:30.178337 containerd[1454]: time="2025-01-13T20:55:30.178238000Z" level=info msg="StartContainer for \"e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7\" returns successfully" Jan 13 20:55:30.182664 systemd[1]: cri-containerd-e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7.scope: Deactivated successfully. Jan 13 20:55:30.216846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7-rootfs.mount: Deactivated successfully. 
Jan 13 20:55:30.220301 containerd[1454]: time="2025-01-13T20:55:30.220197612Z" level=info msg="shim disconnected" id=e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7 namespace=k8s.io Jan 13 20:55:30.220301 containerd[1454]: time="2025-01-13T20:55:30.220290967Z" level=warning msg="cleaning up after shim disconnected" id=e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7 namespace=k8s.io Jan 13 20:55:30.220653 containerd[1454]: time="2025-01-13T20:55:30.220305948Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:55:31.055143 containerd[1454]: time="2025-01-13T20:55:31.054917728Z" level=info msg="CreateContainer within sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:55:31.083531 containerd[1454]: time="2025-01-13T20:55:31.082762091Z" level=info msg="CreateContainer within sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f\"" Jan 13 20:55:31.087942 containerd[1454]: time="2025-01-13T20:55:31.086663223Z" level=info msg="StartContainer for \"c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f\"" Jan 13 20:55:31.134324 systemd[1]: run-containerd-runc-k8s.io-c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f-runc.onpfSS.mount: Deactivated successfully. Jan 13 20:55:31.142612 systemd[1]: Started cri-containerd-c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f.scope - libcontainer container c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f. Jan 13 20:55:31.178184 systemd[1]: cri-containerd-c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f.scope: Deactivated successfully. Jan 13 20:55:31.181859 containerd[1454]: time="2025-01-13T20:55:31.181810543Z" level=info msg="StartContainer for \"c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f\" returns successfully" Jan 13 20:55:31.214636 containerd[1454]: time="2025-01-13T20:55:31.214548888Z" level=info msg="shim disconnected" id=c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f namespace=k8s.io Jan 13 20:55:31.214636 containerd[1454]: time="2025-01-13T20:55:31.214634041Z" level=warning msg="cleaning up after shim disconnected" id=c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f namespace=k8s.io Jan 13 20:55:31.215104 containerd[1454]: time="2025-01-13T20:55:31.214650326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:55:32.058553 containerd[1454]: time="2025-01-13T20:55:32.058491524Z" level=info msg="CreateContainer within sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:55:32.071973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f-rootfs.mount: Deactivated successfully. 
Jan 13 20:55:32.093005 containerd[1454]: time="2025-01-13T20:55:32.092825346Z" level=info msg="CreateContainer within sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\"" Jan 13 20:55:32.096437 containerd[1454]: time="2025-01-13T20:55:32.095974940Z" level=info msg="StartContainer for \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\"" Jan 13 20:55:32.139332 systemd[1]: run-containerd-runc-k8s.io-1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d-runc.dyApDN.mount: Deactivated successfully. Jan 13 20:55:32.149636 systemd[1]: Started cri-containerd-1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d.scope - libcontainer container 1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d. Jan 13 20:55:32.198864 containerd[1454]: time="2025-01-13T20:55:32.198815734Z" level=info msg="StartContainer for \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\" returns successfully" Jan 13 20:55:32.323374 kubelet[2639]: I0113 20:55:32.321976 2639 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:55:32.358541 kubelet[2639]: I0113 20:55:32.358485 2639 topology_manager.go:215] "Topology Admit Handler" podUID="56fe2324-34c1-4396-970e-8a748da075a6" podNamespace="kube-system" podName="coredns-76f75df574-62mtq" Jan 13 20:55:32.365432 kubelet[2639]: I0113 20:55:32.365002 2639 topology_manager.go:215] "Topology Admit Handler" podUID="ff059bf6-850c-48a4-acc6-10a7bbb0a30b" podNamespace="kube-system" podName="coredns-76f75df574-jdkrr" Jan 13 20:55:32.374245 systemd[1]: Created slice kubepods-burstable-pod56fe2324_34c1_4396_970e_8a748da075a6.slice - libcontainer container kubepods-burstable-pod56fe2324_34c1_4396_970e_8a748da075a6.slice. Jan 13 20:55:32.387961 systemd[1]: Created slice kubepods-burstable-podff059bf6_850c_48a4_acc6_10a7bbb0a30b.slice - libcontainer container kubepods-burstable-podff059bf6_850c_48a4_acc6_10a7bbb0a30b.slice. 
Jan 13 20:55:32.503986 kubelet[2639]: I0113 20:55:32.503612 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff059bf6-850c-48a4-acc6-10a7bbb0a30b-config-volume\") pod \"coredns-76f75df574-jdkrr\" (UID: \"ff059bf6-850c-48a4-acc6-10a7bbb0a30b\") " pod="kube-system/coredns-76f75df574-jdkrr" Jan 13 20:55:32.503986 kubelet[2639]: I0113 20:55:32.503694 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrplz\" (UniqueName: \"kubernetes.io/projected/56fe2324-34c1-4396-970e-8a748da075a6-kube-api-access-lrplz\") pod \"coredns-76f75df574-62mtq\" (UID: \"56fe2324-34c1-4396-970e-8a748da075a6\") " pod="kube-system/coredns-76f75df574-62mtq" Jan 13 20:55:32.503986 kubelet[2639]: I0113 20:55:32.503736 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m47ln\" (UniqueName: \"kubernetes.io/projected/ff059bf6-850c-48a4-acc6-10a7bbb0a30b-kube-api-access-m47ln\") pod \"coredns-76f75df574-jdkrr\" (UID: \"ff059bf6-850c-48a4-acc6-10a7bbb0a30b\") " pod="kube-system/coredns-76f75df574-jdkrr" Jan 13 20:55:32.503986 kubelet[2639]: I0113 20:55:32.503776 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56fe2324-34c1-4396-970e-8a748da075a6-config-volume\") pod \"coredns-76f75df574-62mtq\" (UID: \"56fe2324-34c1-4396-970e-8a748da075a6\") " pod="kube-system/coredns-76f75df574-62mtq" Jan 13 20:55:32.684940 containerd[1454]: time="2025-01-13T20:55:32.684758021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-62mtq,Uid:56fe2324-34c1-4396-970e-8a748da075a6,Namespace:kube-system,Attempt:0,}" Jan 13 20:55:32.696339 containerd[1454]: time="2025-01-13T20:55:32.696285815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jdkrr,Uid:ff059bf6-850c-48a4-acc6-10a7bbb0a30b,Namespace:kube-system,Attempt:0,}" Jan 13 20:55:34.510148 systemd-networkd[1366]: cilium_host: Link UP Jan 13 20:55:34.510477 systemd-networkd[1366]: cilium_net: Link UP Jan 13 20:55:34.512053 systemd-networkd[1366]: cilium_net: Gained carrier Jan 13 20:55:34.512897 systemd-networkd[1366]: cilium_host: Gained carrier Jan 13 20:55:34.659792 systemd-networkd[1366]: cilium_vxlan: Link UP Jan 13 20:55:34.660084 systemd-networkd[1366]: cilium_vxlan: Gained carrier Jan 13 20:55:34.963446 kernel: NET: Registered PF_ALG protocol family Jan 13 20:55:35.093795 systemd-networkd[1366]: cilium_net: Gained IPv6LL Jan 13 20:55:35.541642 systemd-networkd[1366]: cilium_host: Gained IPv6LL Jan 13 20:55:35.821906 systemd-networkd[1366]: lxc_health: Link UP Jan 13 20:55:35.836647 systemd-networkd[1366]: lxc_health: Gained carrier Jan 13 20:55:36.272708 kernel: eth0: renamed from tmp6d340 Jan 13 20:55:36.281864 systemd-networkd[1366]: lxcad6dc849b1ab: Link UP Jan 13 20:55:36.282828 systemd-networkd[1366]: lxcad6dc849b1ab: Gained carrier Jan 13 20:55:36.309246 systemd-networkd[1366]: lxcb7d9384a7962: Link UP Jan 13 20:55:36.317611 kernel: eth0: renamed from tmp2e2b8 Jan 13 20:55:36.326699 systemd-networkd[1366]: lxcb7d9384a7962: Gained carrier Jan 13 20:55:36.439138 systemd-networkd[1366]: cilium_vxlan: Gained IPv6LL Jan 13 20:55:37.109302 kubelet[2639]: I0113 20:55:37.109252 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-t24nc" 
podStartSLOduration=12.205561476 podStartE2EDuration="33.109181409s" podCreationTimestamp="2025-01-13 20:55:04 +0000 UTC" firstStartedPulling="2025-01-13 20:55:05.202185912 +0000 UTC m=+13.532695472" lastFinishedPulling="2025-01-13 20:55:26.105805847 +0000 UTC m=+34.436315405" observedRunningTime="2025-01-13 20:55:33.09200224 +0000 UTC m=+41.422511813" watchObservedRunningTime="2025-01-13 20:55:37.109181409 +0000 UTC m=+45.439690981" Jan 13 20:55:37.362562 systemd[1]: Started sshd@7-10.128.0.13:22-147.75.109.163:34258.service - OpenSSH per-connection server daemon (147.75.109.163:34258). Jan 13 20:55:37.398127 systemd-networkd[1366]: lxcb7d9384a7962: Gained IPv6LL Jan 13 20:55:37.590111 systemd-networkd[1366]: lxc_health: Gained IPv6LL Jan 13 20:55:37.713618 sshd[3845]: Accepted publickey for core from 147.75.109.163 port 34258 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY Jan 13 20:55:37.714479 sshd-session[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:55:37.724947 systemd-logind[1441]: New session 8 of user core. Jan 13 20:55:37.731024 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:55:38.038701 systemd-networkd[1366]: lxcad6dc849b1ab: Gained IPv6LL Jan 13 20:55:38.093864 sshd[3850]: Connection closed by 147.75.109.163 port 34258 Jan 13 20:55:38.094782 sshd-session[3845]: pam_unix(sshd:session): session closed for user core Jan 13 20:55:38.101492 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:55:38.105334 systemd[1]: sshd@7-10.128.0.13:22-147.75.109.163:34258.service: Deactivated successfully. Jan 13 20:55:38.109382 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:55:38.111746 systemd-logind[1441]: Removed session 8. 
Jan 13 20:55:40.420290 ntpd[1423]: Listen normally on 7 cilium_host 192.168.0.122:123
Jan 13 20:55:40.421605 ntpd[1423]: 13 Jan 20:55:40 ntpd[1423]: Listen normally on 7 cilium_host 192.168.0.122:123
Jan 13 20:55:40.421605 ntpd[1423]: 13 Jan 20:55:40 ntpd[1423]: Listen normally on 8 cilium_net [fe80::fc80:73ff:fe29:5f0f%4]:123
Jan 13 20:55:40.421605 ntpd[1423]: 13 Jan 20:55:40 ntpd[1423]: Listen normally on 9 cilium_host [fe80::d826:a1ff:fe0e:2ebb%5]:123
Jan 13 20:55:40.421605 ntpd[1423]: 13 Jan 20:55:40 ntpd[1423]: Listen normally on 10 cilium_vxlan [fe80::bc14:14ff:fe1d:cfd8%6]:123
Jan 13 20:55:40.421605 ntpd[1423]: 13 Jan 20:55:40 ntpd[1423]: Listen normally on 11 lxc_health [fe80::6435:3cff:fe77:e29c%8]:123
Jan 13 20:55:40.421605 ntpd[1423]: 13 Jan 20:55:40 ntpd[1423]: Listen normally on 12 lxcad6dc849b1ab [fe80::6494:95ff:fe2d:638c%10]:123
Jan 13 20:55:40.421605 ntpd[1423]: 13 Jan 20:55:40 ntpd[1423]: Listen normally on 13 lxcb7d9384a7962 [fe80::8814:c1ff:fedd:881b%12]:123
Jan 13 20:55:40.420648 ntpd[1423]: Listen normally on 8 cilium_net [fe80::fc80:73ff:fe29:5f0f%4]:123
Jan 13 20:55:40.420763 ntpd[1423]: Listen normally on 9 cilium_host [fe80::d826:a1ff:fe0e:2ebb%5]:123
Jan 13 20:55:40.420832 ntpd[1423]: Listen normally on 10 cilium_vxlan [fe80::bc14:14ff:fe1d:cfd8%6]:123
Jan 13 20:55:40.420889 ntpd[1423]: Listen normally on 11 lxc_health [fe80::6435:3cff:fe77:e29c%8]:123
Jan 13 20:55:40.420945 ntpd[1423]: Listen normally on 12 lxcad6dc849b1ab [fe80::6494:95ff:fe2d:638c%10]:123
Jan 13 20:55:40.421014 ntpd[1423]: Listen normally on 13 lxcb7d9384a7962 [fe80::8814:c1ff:fedd:881b%12]:123
Jan 13 20:55:41.683948 containerd[1454]: time="2025-01-13T20:55:41.683452670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:55:41.683948 containerd[1454]: time="2025-01-13T20:55:41.683546000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:55:41.683948 containerd[1454]: time="2025-01-13T20:55:41.683569512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:55:41.683948 containerd[1454]: time="2025-01-13T20:55:41.683743222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:55:41.707440 containerd[1454]: time="2025-01-13T20:55:41.705459176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:55:41.707440 containerd[1454]: time="2025-01-13T20:55:41.705540011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:55:41.707440 containerd[1454]: time="2025-01-13T20:55:41.705563420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:55:41.707440 containerd[1454]: time="2025-01-13T20:55:41.705688330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:55:41.757985 systemd[1]: Started cri-containerd-6d340e94126f0115b70cbae6a01129193af8a9f768861e7b33a364f1ec18d966.scope - libcontainer container 6d340e94126f0115b70cbae6a01129193af8a9f768861e7b33a364f1ec18d966.
Jan 13 20:55:41.785717 systemd[1]: Started cri-containerd-2e2b8344fd157ea9b1b0f598fe639b1cf2d34bee0fe0a39e0dba9b9e2c092bdc.scope - libcontainer container 2e2b8344fd157ea9b1b0f598fe639b1cf2d34bee0fe0a39e0dba9b9e2c092bdc.
Jan 13 20:55:41.929268 containerd[1454]: time="2025-01-13T20:55:41.928900821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jdkrr,Uid:ff059bf6-850c-48a4-acc6-10a7bbb0a30b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e2b8344fd157ea9b1b0f598fe639b1cf2d34bee0fe0a39e0dba9b9e2c092bdc\""
Jan 13 20:55:41.940940 containerd[1454]: time="2025-01-13T20:55:41.940806659Z" level=info msg="CreateContainer within sandbox \"2e2b8344fd157ea9b1b0f598fe639b1cf2d34bee0fe0a39e0dba9b9e2c092bdc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:55:41.947252 containerd[1454]: time="2025-01-13T20:55:41.946598971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-62mtq,Uid:56fe2324-34c1-4396-970e-8a748da075a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d340e94126f0115b70cbae6a01129193af8a9f768861e7b33a364f1ec18d966\""
Jan 13 20:55:41.952680 containerd[1454]: time="2025-01-13T20:55:41.952528116Z" level=info msg="CreateContainer within sandbox \"6d340e94126f0115b70cbae6a01129193af8a9f768861e7b33a364f1ec18d966\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:55:41.986091 containerd[1454]: time="2025-01-13T20:55:41.984595064Z" level=info msg="CreateContainer within sandbox \"6d340e94126f0115b70cbae6a01129193af8a9f768861e7b33a364f1ec18d966\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3fd16169b80f6a216b30d39a6f7d685ec2d88e0b940de13702d4a11823d63bed\""
Jan 13 20:55:41.986658 containerd[1454]: time="2025-01-13T20:55:41.986626237Z" level=info msg="StartContainer for \"3fd16169b80f6a216b30d39a6f7d685ec2d88e0b940de13702d4a11823d63bed\""
Jan 13 20:55:41.991827 containerd[1454]: time="2025-01-13T20:55:41.991176728Z" level=info msg="CreateContainer within sandbox \"2e2b8344fd157ea9b1b0f598fe639b1cf2d34bee0fe0a39e0dba9b9e2c092bdc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3dbf03eaa157dcf947ace622ebd6b1428f1a05dc5ded2eb12c437237d07ee2e7\""
Jan 13 20:55:41.993752 containerd[1454]: time="2025-01-13T20:55:41.992870593Z" level=info msg="StartContainer for \"3dbf03eaa157dcf947ace622ebd6b1428f1a05dc5ded2eb12c437237d07ee2e7\""
Jan 13 20:55:42.044898 systemd[1]: Started cri-containerd-3dbf03eaa157dcf947ace622ebd6b1428f1a05dc5ded2eb12c437237d07ee2e7.scope - libcontainer container 3dbf03eaa157dcf947ace622ebd6b1428f1a05dc5ded2eb12c437237d07ee2e7.
Jan 13 20:55:42.055669 systemd[1]: Started cri-containerd-3fd16169b80f6a216b30d39a6f7d685ec2d88e0b940de13702d4a11823d63bed.scope - libcontainer container 3fd16169b80f6a216b30d39a6f7d685ec2d88e0b940de13702d4a11823d63bed.
Jan 13 20:55:42.112730 containerd[1454]: time="2025-01-13T20:55:42.112663114Z" level=info msg="StartContainer for \"3dbf03eaa157dcf947ace622ebd6b1428f1a05dc5ded2eb12c437237d07ee2e7\" returns successfully"
Jan 13 20:55:42.124482 containerd[1454]: time="2025-01-13T20:55:42.124401626Z" level=info msg="StartContainer for \"3fd16169b80f6a216b30d39a6f7d685ec2d88e0b940de13702d4a11823d63bed\" returns successfully"
Jan 13 20:55:42.698025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3128840534.mount: Deactivated successfully.
Jan 13 20:55:43.160947 systemd[1]: Started sshd@8-10.128.0.13:22-147.75.109.163:41328.service - OpenSSH per-connection server daemon (147.75.109.163:41328).
Jan 13 20:55:43.164018 kubelet[2639]: I0113 20:55:43.163909 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-62mtq" podStartSLOduration=39.16384014 podStartE2EDuration="39.16384014s" podCreationTimestamp="2025-01-13 20:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:55:43.128144146 +0000 UTC m=+51.458653717" watchObservedRunningTime="2025-01-13 20:55:43.16384014 +0000 UTC m=+51.494349711"
Jan 13 20:55:43.495942 sshd[4029]: Accepted publickey for core from 147.75.109.163 port 41328 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:55:43.498090 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:55:43.504539 systemd-logind[1441]: New session 9 of user core.
Jan 13 20:55:43.510636 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:55:43.784111 sshd[4038]: Connection closed by 147.75.109.163 port 41328
Jan 13 20:55:43.785297 sshd-session[4029]: pam_unix(sshd:session): session closed for user core
Jan 13 20:55:43.790576 systemd[1]: sshd@8-10.128.0.13:22-147.75.109.163:41328.service: Deactivated successfully.
Jan 13 20:55:43.793331 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 20:55:43.794675 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit.
Jan 13 20:55:43.796252 systemd-logind[1441]: Removed session 9.
Jan 13 20:55:48.843854 systemd[1]: Started sshd@9-10.128.0.13:22-147.75.109.163:54806.service - OpenSSH per-connection server daemon (147.75.109.163:54806).
Jan 13 20:55:49.134325 sshd[4050]: Accepted publickey for core from 147.75.109.163 port 54806 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:55:49.136092 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:55:49.142909 systemd-logind[1441]: New session 10 of user core.
Jan 13 20:55:49.145629 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:55:49.429463 sshd[4052]: Connection closed by 147.75.109.163 port 54806
Jan 13 20:55:49.430681 sshd-session[4050]: pam_unix(sshd:session): session closed for user core
Jan 13 20:55:49.435028 systemd[1]: sshd@9-10.128.0.13:22-147.75.109.163:54806.service: Deactivated successfully.
Jan 13 20:55:49.437955 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:55:49.440048 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:55:49.441728 systemd-logind[1441]: Removed session 10.
Jan 13 20:55:54.485787 systemd[1]: Started sshd@10-10.128.0.13:22-147.75.109.163:54816.service - OpenSSH per-connection server daemon (147.75.109.163:54816).
Jan 13 20:55:54.785911 sshd[4066]: Accepted publickey for core from 147.75.109.163 port 54816 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:55:54.787720 sshd-session[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:55:54.793475 systemd-logind[1441]: New session 11 of user core.
Jan 13 20:55:54.800644 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:55:55.071838 sshd[4068]: Connection closed by 147.75.109.163 port 54816
Jan 13 20:55:55.073012 sshd-session[4066]: pam_unix(sshd:session): session closed for user core
Jan 13 20:55:55.077826 systemd[1]: sshd@10-10.128.0.13:22-147.75.109.163:54816.service: Deactivated successfully.
Jan 13 20:55:55.080544 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:55:55.082533 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:55:55.084201 systemd-logind[1441]: Removed session 11.
Jan 13 20:56:00.130816 systemd[1]: Started sshd@11-10.128.0.13:22-147.75.109.163:56944.service - OpenSSH per-connection server daemon (147.75.109.163:56944).
Jan 13 20:56:00.428348 sshd[4080]: Accepted publickey for core from 147.75.109.163 port 56944 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:00.430115 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:00.436738 systemd-logind[1441]: New session 12 of user core.
Jan 13 20:56:00.451693 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 20:56:00.715039 sshd[4082]: Connection closed by 147.75.109.163 port 56944
Jan 13 20:56:00.715989 sshd-session[4080]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:00.720470 systemd[1]: sshd@11-10.128.0.13:22-147.75.109.163:56944.service: Deactivated successfully.
Jan 13 20:56:00.723293 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 20:56:00.725400 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit.
Jan 13 20:56:00.727081 systemd-logind[1441]: Removed session 12.
Jan 13 20:56:05.771825 systemd[1]: Started sshd@12-10.128.0.13:22-147.75.109.163:56952.service - OpenSSH per-connection server daemon (147.75.109.163:56952).
Jan 13 20:56:06.064746 sshd[4094]: Accepted publickey for core from 147.75.109.163 port 56952 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:06.066516 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:06.073575 systemd-logind[1441]: New session 13 of user core.
Jan 13 20:56:06.079670 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 20:56:06.356702 sshd[4096]: Connection closed by 147.75.109.163 port 56952
Jan 13 20:56:06.358019 sshd-session[4094]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:06.363669 systemd[1]: sshd@12-10.128.0.13:22-147.75.109.163:56952.service: Deactivated successfully.
Jan 13 20:56:06.366477 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 20:56:06.367656 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit.
Jan 13 20:56:06.369383 systemd-logind[1441]: Removed session 13.
Jan 13 20:56:11.413746 systemd[1]: Started sshd@13-10.128.0.13:22-147.75.109.163:40252.service - OpenSSH per-connection server daemon (147.75.109.163:40252).
Jan 13 20:56:11.711715 sshd[4110]: Accepted publickey for core from 147.75.109.163 port 40252 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:11.713723 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:11.719487 systemd-logind[1441]: New session 14 of user core.
Jan 13 20:56:11.726611 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 20:56:12.001953 sshd[4113]: Connection closed by 147.75.109.163 port 40252
Jan 13 20:56:12.003277 sshd-session[4110]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:12.007452 systemd[1]: sshd@13-10.128.0.13:22-147.75.109.163:40252.service: Deactivated successfully.
Jan 13 20:56:12.010030 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 20:56:12.012029 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit.
Jan 13 20:56:12.013933 systemd-logind[1441]: Removed session 14.
Jan 13 20:56:12.058860 systemd[1]: Started sshd@14-10.128.0.13:22-147.75.109.163:40258.service - OpenSSH per-connection server daemon (147.75.109.163:40258).
Jan 13 20:56:12.359738 sshd[4125]: Accepted publickey for core from 147.75.109.163 port 40258 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:12.361577 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:12.368149 systemd-logind[1441]: New session 15 of user core.
Jan 13 20:56:12.372658 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 20:56:12.687511 sshd[4128]: Connection closed by 147.75.109.163 port 40258
Jan 13 20:56:12.687722 sshd-session[4125]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:12.696007 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit.
Jan 13 20:56:12.697089 systemd[1]: sshd@14-10.128.0.13:22-147.75.109.163:40258.service: Deactivated successfully.
Jan 13 20:56:12.703038 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 20:56:12.707221 systemd-logind[1441]: Removed session 15.
Jan 13 20:56:12.737159 systemd[1]: Started sshd@15-10.128.0.13:22-147.75.109.163:40266.service - OpenSSH per-connection server daemon (147.75.109.163:40266).
Jan 13 20:56:13.039287 sshd[4136]: Accepted publickey for core from 147.75.109.163 port 40266 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:13.041135 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:13.050455 systemd-logind[1441]: New session 16 of user core.
Jan 13 20:56:13.057663 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:56:13.327565 sshd[4138]: Connection closed by 147.75.109.163 port 40266
Jan 13 20:56:13.328511 sshd-session[4136]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:13.333929 systemd[1]: sshd@15-10.128.0.13:22-147.75.109.163:40266.service: Deactivated successfully.
Jan 13 20:56:13.336671 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:56:13.337929 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:56:13.339382 systemd-logind[1441]: Removed session 16.
Jan 13 20:56:18.386834 systemd[1]: Started sshd@16-10.128.0.13:22-147.75.109.163:42556.service - OpenSSH per-connection server daemon (147.75.109.163:42556).
Jan 13 20:56:18.686030 sshd[4149]: Accepted publickey for core from 147.75.109.163 port 42556 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:18.687876 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:18.693916 systemd-logind[1441]: New session 17 of user core.
Jan 13 20:56:18.700650 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 20:56:18.974475 sshd[4151]: Connection closed by 147.75.109.163 port 42556
Jan 13 20:56:18.975601 sshd-session[4149]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:18.979776 systemd[1]: sshd@16-10.128.0.13:22-147.75.109.163:42556.service: Deactivated successfully.
Jan 13 20:56:18.983151 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 20:56:18.985677 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit.
Jan 13 20:56:18.987299 systemd-logind[1441]: Removed session 17.
Jan 13 20:56:24.034842 systemd[1]: Started sshd@17-10.128.0.13:22-147.75.109.163:42558.service - OpenSSH per-connection server daemon (147.75.109.163:42558).
Jan 13 20:56:24.329542 sshd[4161]: Accepted publickey for core from 147.75.109.163 port 42558 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:24.331309 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:24.337907 systemd-logind[1441]: New session 18 of user core.
Jan 13 20:56:24.349674 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 20:56:24.623831 sshd[4163]: Connection closed by 147.75.109.163 port 42558
Jan 13 20:56:24.624749 sshd-session[4161]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:24.630887 systemd[1]: sshd@17-10.128.0.13:22-147.75.109.163:42558.service: Deactivated successfully.
Jan 13 20:56:24.633613 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 20:56:24.634874 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit.
Jan 13 20:56:24.636322 systemd-logind[1441]: Removed session 18.
Jan 13 20:56:24.683811 systemd[1]: Started sshd@18-10.128.0.13:22-147.75.109.163:42568.service - OpenSSH per-connection server daemon (147.75.109.163:42568).
Jan 13 20:56:24.975978 sshd[4174]: Accepted publickey for core from 147.75.109.163 port 42568 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:24.977818 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:24.984450 systemd-logind[1441]: New session 19 of user core.
Jan 13 20:56:24.990951 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 20:56:25.328203 sshd[4176]: Connection closed by 147.75.109.163 port 42568
Jan 13 20:56:25.329119 sshd-session[4174]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:25.334860 systemd[1]: sshd@18-10.128.0.13:22-147.75.109.163:42568.service: Deactivated successfully.
Jan 13 20:56:25.337626 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 20:56:25.338744 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit.
Jan 13 20:56:25.340306 systemd-logind[1441]: Removed session 19.
Jan 13 20:56:25.386783 systemd[1]: Started sshd@19-10.128.0.13:22-147.75.109.163:42570.service - OpenSSH per-connection server daemon (147.75.109.163:42570).
Jan 13 20:56:25.678287 sshd[4184]: Accepted publickey for core from 147.75.109.163 port 42570 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:25.680386 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:25.686041 systemd-logind[1441]: New session 20 of user core.
Jan 13 20:56:25.693632 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 20:56:27.410066 sshd[4186]: Connection closed by 147.75.109.163 port 42570
Jan 13 20:56:27.411324 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:27.416932 systemd[1]: sshd@19-10.128.0.13:22-147.75.109.163:42570.service: Deactivated successfully.
Jan 13 20:56:27.420137 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:56:27.421468 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:56:27.423014 systemd-logind[1441]: Removed session 20.
Jan 13 20:56:27.467829 systemd[1]: Started sshd@20-10.128.0.13:22-147.75.109.163:41154.service - OpenSSH per-connection server daemon (147.75.109.163:41154).
Jan 13 20:56:27.759561 sshd[4202]: Accepted publickey for core from 147.75.109.163 port 41154 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:27.761293 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:27.766955 systemd-logind[1441]: New session 21 of user core.
Jan 13 20:56:27.773635 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 20:56:28.201768 sshd[4204]: Connection closed by 147.75.109.163 port 41154
Jan 13 20:56:28.202619 sshd-session[4202]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:28.207964 systemd[1]: sshd@20-10.128.0.13:22-147.75.109.163:41154.service: Deactivated successfully.
Jan 13 20:56:28.211190 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 20:56:28.213339 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit.
Jan 13 20:56:28.215245 systemd-logind[1441]: Removed session 21.
Jan 13 20:56:28.258199 systemd[1]: Started sshd@21-10.128.0.13:22-147.75.109.163:41158.service - OpenSSH per-connection server daemon (147.75.109.163:41158).
Jan 13 20:56:28.557048 sshd[4213]: Accepted publickey for core from 147.75.109.163 port 41158 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:28.558777 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:28.565371 systemd-logind[1441]: New session 22 of user core.
Jan 13 20:56:28.571594 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 20:56:28.848432 sshd[4215]: Connection closed by 147.75.109.163 port 41158
Jan 13 20:56:28.849301 sshd-session[4213]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:28.855268 systemd[1]: sshd@21-10.128.0.13:22-147.75.109.163:41158.service: Deactivated successfully.
Jan 13 20:56:28.858081 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 20:56:28.859176 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit.
Jan 13 20:56:28.860714 systemd-logind[1441]: Removed session 22.
Jan 13 20:56:33.913814 systemd[1]: Started sshd@22-10.128.0.13:22-147.75.109.163:41162.service - OpenSSH per-connection server daemon (147.75.109.163:41162).
Jan 13 20:56:34.205488 sshd[4229]: Accepted publickey for core from 147.75.109.163 port 41162 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:34.207228 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:34.213529 systemd-logind[1441]: New session 23 of user core.
Jan 13 20:56:34.219038 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 20:56:34.492638 sshd[4231]: Connection closed by 147.75.109.163 port 41162
Jan 13 20:56:34.493487 sshd-session[4229]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:34.497807 systemd[1]: sshd@22-10.128.0.13:22-147.75.109.163:41162.service: Deactivated successfully.
Jan 13 20:56:34.500815 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 20:56:34.502798 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit.
Jan 13 20:56:34.504809 systemd-logind[1441]: Removed session 23.
Jan 13 20:56:39.550797 systemd[1]: Started sshd@23-10.128.0.13:22-147.75.109.163:59206.service - OpenSSH per-connection server daemon (147.75.109.163:59206).
Jan 13 20:56:39.840998 sshd[4246]: Accepted publickey for core from 147.75.109.163 port 59206 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:39.842743 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:39.849818 systemd-logind[1441]: New session 24 of user core.
Jan 13 20:56:39.857655 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 20:56:40.132258 sshd[4248]: Connection closed by 147.75.109.163 port 59206
Jan 13 20:56:40.133624 sshd-session[4246]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:40.139005 systemd[1]: sshd@23-10.128.0.13:22-147.75.109.163:59206.service: Deactivated successfully.
Jan 13 20:56:40.141544 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 20:56:40.142762 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit.
Jan 13 20:56:40.144244 systemd-logind[1441]: Removed session 24.
Jan 13 20:56:45.188825 systemd[1]: Started sshd@24-10.128.0.13:22-147.75.109.163:59214.service - OpenSSH per-connection server daemon (147.75.109.163:59214).
Jan 13 20:56:45.480267 sshd[4258]: Accepted publickey for core from 147.75.109.163 port 59214 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:45.482125 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:45.488755 systemd-logind[1441]: New session 25 of user core.
Jan 13 20:56:45.491709 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 20:56:45.764577 sshd[4260]: Connection closed by 147.75.109.163 port 59214
Jan 13 20:56:45.765772 sshd-session[4258]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:45.771778 systemd[1]: sshd@24-10.128.0.13:22-147.75.109.163:59214.service: Deactivated successfully.
Jan 13 20:56:45.774315 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 20:56:45.775485 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit.
Jan 13 20:56:45.777200 systemd-logind[1441]: Removed session 25.
Jan 13 20:56:45.823190 systemd[1]: Started sshd@25-10.128.0.13:22-147.75.109.163:59220.service - OpenSSH per-connection server daemon (147.75.109.163:59220).
Jan 13 20:56:46.123798 sshd[4271]: Accepted publickey for core from 147.75.109.163 port 59220 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:46.125657 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:46.132167 systemd-logind[1441]: New session 26 of user core.
Jan 13 20:56:46.138658 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 20:56:48.500439 kubelet[2639]: I0113 20:56:48.498201 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jdkrr" podStartSLOduration=104.498135693 podStartE2EDuration="1m44.498135693s" podCreationTimestamp="2025-01-13 20:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:55:43.203375399 +0000 UTC m=+51.533884971" watchObservedRunningTime="2025-01-13 20:56:48.498135693 +0000 UTC m=+116.828645266"
Jan 13 20:56:48.517960 containerd[1454]: time="2025-01-13T20:56:48.517883950Z" level=info msg="StopContainer for \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\" with timeout 30 (s)"
Jan 13 20:56:48.520731 containerd[1454]: time="2025-01-13T20:56:48.519447646Z" level=info msg="Stop container \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\" with signal terminated"
Jan 13 20:56:48.546130 systemd[1]: cri-containerd-e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8.scope: Deactivated successfully.
Jan 13 20:56:48.563232 containerd[1454]: time="2025-01-13T20:56:48.563176884Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:56:48.574158 containerd[1454]: time="2025-01-13T20:56:48.574115620Z" level=info msg="StopContainer for \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\" with timeout 2 (s)"
Jan 13 20:56:48.575091 containerd[1454]: time="2025-01-13T20:56:48.575055174Z" level=info msg="Stop container \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\" with signal terminated"
Jan 13 20:56:48.596230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8-rootfs.mount: Deactivated successfully.
Jan 13 20:56:48.597990 systemd-networkd[1366]: lxc_health: Link DOWN
Jan 13 20:56:48.598003 systemd-networkd[1366]: lxc_health: Lost carrier
Jan 13 20:56:48.605286 containerd[1454]: time="2025-01-13T20:56:48.604975384Z" level=info msg="shim disconnected" id=e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8 namespace=k8s.io
Jan 13 20:56:48.605286 containerd[1454]: time="2025-01-13T20:56:48.605046808Z" level=warning msg="cleaning up after shim disconnected" id=e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8 namespace=k8s.io
Jan 13 20:56:48.605286 containerd[1454]: time="2025-01-13T20:56:48.605062776Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:56:48.627914 systemd[1]: cri-containerd-1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d.scope: Deactivated successfully.
Jan 13 20:56:48.628240 systemd[1]: cri-containerd-1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d.scope: Consumed 9.526s CPU time.
Jan 13 20:56:48.643828 containerd[1454]: time="2025-01-13T20:56:48.642487908Z" level=info msg="StopContainer for \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\" returns successfully"
Jan 13 20:56:48.644270 containerd[1454]: time="2025-01-13T20:56:48.644010136Z" level=info msg="StopPodSandbox for \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\""
Jan 13 20:56:48.644593 containerd[1454]: time="2025-01-13T20:56:48.644370775Z" level=info msg="Container to stop \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:56:48.651374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316-shm.mount: Deactivated successfully.
Jan 13 20:56:48.664334 systemd[1]: cri-containerd-5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316.scope: Deactivated successfully.
Jan 13 20:56:48.689780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d-rootfs.mount: Deactivated successfully.
Jan 13 20:56:48.694674 containerd[1454]: time="2025-01-13T20:56:48.694104057Z" level=info msg="shim disconnected" id=1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d namespace=k8s.io
Jan 13 20:56:48.694674 containerd[1454]: time="2025-01-13T20:56:48.694176285Z" level=warning msg="cleaning up after shim disconnected" id=1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d namespace=k8s.io
Jan 13 20:56:48.694674 containerd[1454]: time="2025-01-13T20:56:48.694189108Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:56:48.716291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316-rootfs.mount: Deactivated successfully.
Jan 13 20:56:48.719702 containerd[1454]: time="2025-01-13T20:56:48.719219686Z" level=info msg="shim disconnected" id=5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316 namespace=k8s.io
Jan 13 20:56:48.719702 containerd[1454]: time="2025-01-13T20:56:48.719303035Z" level=warning msg="cleaning up after shim disconnected" id=5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316 namespace=k8s.io
Jan 13 20:56:48.719702 containerd[1454]: time="2025-01-13T20:56:48.719318717Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:56:48.730690 containerd[1454]: time="2025-01-13T20:56:48.730239193Z" level=info msg="StopContainer for \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\" returns successfully"
Jan 13 20:56:48.731431 containerd[1454]: time="2025-01-13T20:56:48.731388834Z" level=info msg="StopPodSandbox for \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\""
Jan 13 20:56:48.731635 containerd[1454]: time="2025-01-13T20:56:48.731542784Z" level=info msg="Container to stop \"e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:56:48.731635 containerd[1454]: time="2025-01-13T20:56:48.731638974Z" level=info msg="Container to stop \"c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:56:48.731787 containerd[1454]: time="2025-01-13T20:56:48.731656237Z" level=info msg="Container to stop \"97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:56:48.731787 containerd[1454]: time="2025-01-13T20:56:48.731671317Z" level=info msg="Container to stop \"27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:56:48.731787 containerd[1454]: time="2025-01-13T20:56:48.731686234Z" level=info msg="Container to stop \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:56:48.744003 systemd[1]: cri-containerd-439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503.scope: Deactivated successfully.
Jan 13 20:56:48.748695 containerd[1454]: time="2025-01-13T20:56:48.748577227Z" level=info msg="TearDown network for sandbox \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\" successfully"
Jan 13 20:56:48.749376 containerd[1454]: time="2025-01-13T20:56:48.748813393Z" level=info msg="StopPodSandbox for \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\" returns successfully"
Jan 13 20:56:48.789849 containerd[1454]: time="2025-01-13T20:56:48.786146371Z" level=info msg="shim disconnected" id=439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503 namespace=k8s.io
Jan 13 20:56:48.789849 containerd[1454]: time="2025-01-13T20:56:48.786213618Z" level=warning msg="cleaning up after shim disconnected" id=439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503 namespace=k8s.io
Jan 13 20:56:48.789849 containerd[1454]: time="2025-01-13T20:56:48.786227534Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:56:48.809077 containerd[1454]: time="2025-01-13T20:56:48.809012921Z" level=info msg="TearDown network for sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" successfully"
Jan 13 20:56:48.809248 containerd[1454]: time="2025-01-13T20:56:48.809132583Z" level=info msg="StopPodSandbox for \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" returns successfully"
Jan 13 20:56:48.881444 kubelet[2639]: I0113 20:56:48.879436 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7tbm\" (UniqueName: \"kubernetes.io/projected/8c1bab34-5e34-4a23-82fa-6cc20c2b86fc-kube-api-access-z7tbm\") pod \"8c1bab34-5e34-4a23-82fa-6cc20c2b86fc\" (UID: \"8c1bab34-5e34-4a23-82fa-6cc20c2b86fc\") "
Jan 13 20:56:48.881444 kubelet[2639]: I0113 20:56:48.879513 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c1bab34-5e34-4a23-82fa-6cc20c2b86fc-cilium-config-path\") pod \"8c1bab34-5e34-4a23-82fa-6cc20c2b86fc\" (UID: \"8c1bab34-5e34-4a23-82fa-6cc20c2b86fc\") "
Jan 13 20:56:48.883207 kubelet[2639]: I0113 20:56:48.883159 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1bab34-5e34-4a23-82fa-6cc20c2b86fc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8c1bab34-5e34-4a23-82fa-6cc20c2b86fc" (UID: "8c1bab34-5e34-4a23-82fa-6cc20c2b86fc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:56:48.884207 kubelet[2639]: I0113 20:56:48.884166 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1bab34-5e34-4a23-82fa-6cc20c2b86fc-kube-api-access-z7tbm" (OuterVolumeSpecName: "kube-api-access-z7tbm") pod "8c1bab34-5e34-4a23-82fa-6cc20c2b86fc" (UID: "8c1bab34-5e34-4a23-82fa-6cc20c2b86fc"). InnerVolumeSpecName "kube-api-access-z7tbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:56:48.980879 kubelet[2639]: I0113 20:56:48.980823 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-hostproc\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.981765 kubelet[2639]: I0113 20:56:48.980893 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cni-path\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.981765 kubelet[2639]: I0113 20:56:48.980933 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cilium-run\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.981765 kubelet[2639]: I0113 20:56:48.980960 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-bpf-maps\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.981765 kubelet[2639]: I0113 20:56:48.980993 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-hubble-tls\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.981765 kubelet[2639]: I0113 20:56:48.980998 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-hostproc" (OuterVolumeSpecName: "hostproc") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:56:48.981765 kubelet[2639]: I0113 20:56:48.981022 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-etc-cni-netd\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.982151 kubelet[2639]: I0113 20:56:48.981048 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-lib-modules\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.982151 kubelet[2639]: I0113 20:56:48.981061 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:56:48.982151 kubelet[2639]: I0113 20:56:48.981092 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tx2w\" (UniqueName: \"kubernetes.io/projected/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-kube-api-access-8tx2w\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.982151 kubelet[2639]: I0113 20:56:48.981111 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:56:48.982151 kubelet[2639]: I0113 20:56:48.981137 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cilium-config-path\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.982651 kubelet[2639]: I0113 20:56:48.981137 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:56:48.982651 kubelet[2639]: I0113 20:56:48.981169 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cilium-cgroup\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.982651 kubelet[2639]: I0113 20:56:48.981197 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-xtables-lock\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.982651 kubelet[2639]: I0113 20:56:48.981234 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-clustermesh-secrets\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.982651 kubelet[2639]: I0113 20:56:48.981280 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-host-proc-sys-net\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.982651 kubelet[2639]: I0113 20:56:48.981311 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-host-proc-sys-kernel\") pod \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\" (UID: \"0aaf1327-70f4-4727-a2d4-ff0db35bb2ae\") "
Jan 13 20:56:48.983854 kubelet[2639]: I0113 20:56:48.981486 2639 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z7tbm\" (UniqueName: \"kubernetes.io/projected/8c1bab34-5e34-4a23-82fa-6cc20c2b86fc-kube-api-access-z7tbm\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\""
\"kube-api-access-z7tbm\" (UniqueName: \"kubernetes.io/projected/8c1bab34-5e34-4a23-82fa-6cc20c2b86fc-kube-api-access-z7tbm\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:48.983854 kubelet[2639]: I0113 20:56:48.981512 2639 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c1bab34-5e34-4a23-82fa-6cc20c2b86fc-cilium-config-path\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:48.983854 kubelet[2639]: I0113 20:56:48.981533 2639 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-hostproc\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:48.983854 kubelet[2639]: I0113 20:56:48.981555 2639 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cilium-run\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:48.983854 kubelet[2639]: I0113 20:56:48.981573 2639 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-bpf-maps\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:48.983854 kubelet[2639]: I0113 20:56:48.981593 2639 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-lib-modules\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:48.984543 kubelet[2639]: I0113 20:56:48.984481 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:56:48.984766 kubelet[2639]: I0113 20:56:48.981088 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cni-path" (OuterVolumeSpecName: "cni-path") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:56:48.985996 kubelet[2639]: I0113 20:56:48.985933 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:56:48.986213 kubelet[2639]: I0113 20:56:48.986112 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:56:48.986213 kubelet[2639]: I0113 20:56:48.986170 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:56:48.986213 kubelet[2639]: I0113 20:56:48.986201 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:56:48.987467 kubelet[2639]: I0113 20:56:48.987435 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-kube-api-access-8tx2w" (OuterVolumeSpecName: "kube-api-access-8tx2w") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "kube-api-access-8tx2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:56:48.990340 kubelet[2639]: I0113 20:56:48.990307 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:56:48.991638 kubelet[2639]: I0113 20:56:48.991605 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:56:48.992302 kubelet[2639]: I0113 20:56:48.992259 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" (UID: "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:56:49.082879 kubelet[2639]: I0113 20:56:49.082827 2639 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-hubble-tls\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:49.082879 kubelet[2639]: I0113 20:56:49.082880 2639 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-etc-cni-netd\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:49.082879 kubelet[2639]: I0113 20:56:49.082903 2639 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8tx2w\" (UniqueName: \"kubernetes.io/projected/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-kube-api-access-8tx2w\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:49.083170 kubelet[2639]: I0113 20:56:49.082922 2639 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cilium-config-path\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:49.083170 kubelet[2639]: I0113 20:56:49.082939 2639 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cilium-cgroup\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:49.083170 kubelet[2639]: I0113 20:56:49.082955 2639 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-xtables-lock\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:49.083170 kubelet[2639]: I0113 20:56:49.082974 2639 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-clustermesh-secrets\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:49.083170 kubelet[2639]: I0113 20:56:49.082992 2639 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-host-proc-sys-net\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:49.083170 kubelet[2639]: I0113 20:56:49.083008 2639 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-host-proc-sys-kernel\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:49.083170 kubelet[2639]: I0113 20:56:49.083026 2639 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae-cni-path\") on node \"ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 20:56:49.255425 kubelet[2639]: I0113 20:56:49.253592 2639 scope.go:117] "RemoveContainer" containerID="e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8" Jan 13 20:56:49.258726 containerd[1454]: time="2025-01-13T20:56:49.258673835Z" level=info 
msg="RemoveContainer for \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\"" Jan 13 20:56:49.264688 containerd[1454]: time="2025-01-13T20:56:49.264628147Z" level=info msg="RemoveContainer for \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\" returns successfully" Jan 13 20:56:49.265432 kubelet[2639]: I0113 20:56:49.265375 2639 scope.go:117] "RemoveContainer" containerID="e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8" Jan 13 20:56:49.265482 systemd[1]: Removed slice kubepods-besteffort-pod8c1bab34_5e34_4a23_82fa_6cc20c2b86fc.slice - libcontainer container kubepods-besteffort-pod8c1bab34_5e34_4a23_82fa_6cc20c2b86fc.slice. Jan 13 20:56:49.267090 containerd[1454]: time="2025-01-13T20:56:49.267045479Z" level=error msg="ContainerStatus for \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\": not found" Jan 13 20:56:49.267959 kubelet[2639]: E0113 20:56:49.267926 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\": not found" containerID="e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8" Jan 13 20:56:49.268073 kubelet[2639]: I0113 20:56:49.268058 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8"} err="failed to get container status \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"e954e85f112228886af61ee8e99e99c21b6a2b3a4fe4a27d15517df203329fc8\": not found" Jan 13 20:56:49.268134 kubelet[2639]: I0113 20:56:49.268080 2639 scope.go:117] "RemoveContainer" containerID="1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d" Jan 13 20:56:49.274742 containerd[1454]: time="2025-01-13T20:56:49.274653706Z" level=info msg="RemoveContainer for \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\"" Jan 13 20:56:49.276558 systemd[1]: Removed slice kubepods-burstable-pod0aaf1327_70f4_4727_a2d4_ff0db35bb2ae.slice - libcontainer container kubepods-burstable-pod0aaf1327_70f4_4727_a2d4_ff0db35bb2ae.slice. Jan 13 20:56:49.276954 systemd[1]: kubepods-burstable-pod0aaf1327_70f4_4727_a2d4_ff0db35bb2ae.slice: Consumed 9.644s CPU time. 
Jan 13 20:56:49.281200 containerd[1454]: time="2025-01-13T20:56:49.281151046Z" level=info msg="RemoveContainer for \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\" returns successfully"
Jan 13 20:56:49.281448 kubelet[2639]: I0113 20:56:49.281391 2639 scope.go:117] "RemoveContainer" containerID="c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f"
Jan 13 20:56:49.282926 containerd[1454]: time="2025-01-13T20:56:49.282801530Z" level=info msg="RemoveContainer for \"c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f\""
Jan 13 20:56:49.289780 containerd[1454]: time="2025-01-13T20:56:49.289724898Z" level=info msg="RemoveContainer for \"c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f\" returns successfully"
Jan 13 20:56:49.290086 kubelet[2639]: I0113 20:56:49.289932 2639 scope.go:117] "RemoveContainer" containerID="e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7"
Jan 13 20:56:49.291554 containerd[1454]: time="2025-01-13T20:56:49.291507843Z" level=info msg="RemoveContainer for \"e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7\""
Jan 13 20:56:49.296059 containerd[1454]: time="2025-01-13T20:56:49.296005383Z" level=info msg="RemoveContainer for \"e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7\" returns successfully"
Jan 13 20:56:49.297033 kubelet[2639]: I0113 20:56:49.296554 2639 scope.go:117] "RemoveContainer" containerID="27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee"
Jan 13 20:56:49.298260 containerd[1454]: time="2025-01-13T20:56:49.298227211Z" level=info msg="RemoveContainer for \"27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee\""
Jan 13 20:56:49.304719 containerd[1454]: time="2025-01-13T20:56:49.304591243Z" level=info msg="RemoveContainer for \"27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee\" returns successfully"
Jan 13 20:56:49.306888 kubelet[2639]: I0113 20:56:49.306852 2639 scope.go:117] "RemoveContainer" containerID="97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3"
Jan 13 20:56:49.312171 containerd[1454]: time="2025-01-13T20:56:49.312138807Z" level=info msg="RemoveContainer for \"97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3\""
Jan 13 20:56:49.317100 containerd[1454]: time="2025-01-13T20:56:49.317035371Z" level=info msg="RemoveContainer for \"97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3\" returns successfully"
Jan 13 20:56:49.317799 kubelet[2639]: I0113 20:56:49.317768 2639 scope.go:117] "RemoveContainer" containerID="1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d"
Jan 13 20:56:49.318095 containerd[1454]: time="2025-01-13T20:56:49.318039166Z" level=error msg="ContainerStatus for \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\": not found"
Jan 13 20:56:49.318296 kubelet[2639]: E0113 20:56:49.318248 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\": not found" containerID="1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d"
Jan 13 20:56:49.318481 kubelet[2639]: I0113 20:56:49.318302 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d"} err="failed to get container status \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f83fbbad71b9e18470f2ee75e8c3731790d5c3f348f7e7a208fcaa0fec87a4d\": not found"
Jan 13 20:56:49.318481 kubelet[2639]: I0113 20:56:49.318322 2639 scope.go:117] "RemoveContainer" containerID="c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f"
Jan 13 20:56:49.318669 containerd[1454]: time="2025-01-13T20:56:49.318562084Z" level=error msg="ContainerStatus for \"c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f\": not found"
Jan 13 20:56:49.318780 kubelet[2639]: E0113 20:56:49.318753 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f\": not found" containerID="c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f"
Jan 13 20:56:49.318901 kubelet[2639]: I0113 20:56:49.318820 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f"} err="failed to get container status \"c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7f92f17adcc57b21b56dfc7c667480714e4277d605857b1ac2981a9b97ede9f\": not found"
Jan 13 20:56:49.318901 kubelet[2639]: I0113 20:56:49.318841 2639 scope.go:117] "RemoveContainer" containerID="e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7"
Jan 13 20:56:49.319081 containerd[1454]: time="2025-01-13T20:56:49.319041064Z" level=error msg="ContainerStatus for \"e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7\": not found"
Jan 13 20:56:49.319930 kubelet[2639]: E0113 20:56:49.319543 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7\": not found" containerID="e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7"
Jan 13 20:56:49.319930 kubelet[2639]: I0113 20:56:49.319596 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7"} err="failed to get container status \"e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8a6ed08a05b931e5b15a5336e6be6103e4de75038446c1a7a3055a716f388a7\": not found"
Jan 13 20:56:49.319930 kubelet[2639]: I0113 20:56:49.319614 2639 scope.go:117] "RemoveContainer" containerID="27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee"
Jan 13 20:56:49.320183 containerd[1454]: time="2025-01-13T20:56:49.319868323Z" level=error msg="ContainerStatus for \"27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee\": not found"
Jan 13 20:56:49.320328 kubelet[2639]: E0113 20:56:49.320082 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee\": not found" containerID="27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee"
Jan 13 20:56:49.320328 kubelet[2639]: I0113 20:56:49.320132 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee"} err="failed to get container status \"27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee\": rpc error: code = NotFound desc = an error occurred when try to find container \"27a95b3b529dc6d93268a8980c47d6f781c5b36717567bba230750963489cdee\": not found"
Jan 13 20:56:49.320328 kubelet[2639]: I0113 20:56:49.320151 2639 scope.go:117] "RemoveContainer" containerID="97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3"
Jan 13 20:56:49.320874 kubelet[2639]: E0113 20:56:49.320712 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3\": not found" containerID="97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3"
Jan 13 20:56:49.320874 kubelet[2639]: I0113 20:56:49.320749 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3"} err="failed to get container status \"97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3\": not found"
Jan 13 20:56:49.321150 containerd[1454]: time="2025-01-13T20:56:49.320546717Z" level=error msg="ContainerStatus for \"97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97f38c2b9c1ab7174dfd1a195115a5237813513cc8da986dfeaedb564a0859e3\": not found"
Jan 13 20:56:49.540164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503-rootfs.mount: Deactivated successfully.
Jan 13 20:56:49.540351 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503-shm.mount: Deactivated successfully.
Jan 13 20:56:49.540524 systemd[1]: var-lib-kubelet-pods-0aaf1327\x2d70f4\x2d4727\x2da2d4\x2dff0db35bb2ae-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:56:49.540634 systemd[1]: var-lib-kubelet-pods-0aaf1327\x2d70f4\x2d4727\x2da2d4\x2dff0db35bb2ae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8tx2w.mount: Deactivated successfully.
Jan 13 20:56:49.540744 systemd[1]: var-lib-kubelet-pods-0aaf1327\x2d70f4\x2d4727\x2da2d4\x2dff0db35bb2ae-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 20:56:49.540879 systemd[1]: var-lib-kubelet-pods-8c1bab34\x2d5e34\x2d4a23\x2d82fa\x2d6cc20c2b86fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz7tbm.mount: Deactivated successfully.
Jan 13 20:56:49.848874 kubelet[2639]: I0113 20:56:49.848821 2639 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" path="/var/lib/kubelet/pods/0aaf1327-70f4-4727-a2d4-ff0db35bb2ae/volumes"
Jan 13 20:56:49.849838 kubelet[2639]: I0113 20:56:49.849790 2639 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8c1bab34-5e34-4a23-82fa-6cc20c2b86fc" path="/var/lib/kubelet/pods/8c1bab34-5e34-4a23-82fa-6cc20c2b86fc/volumes"
Jan 13 20:56:50.497665 sshd[4273]: Connection closed by 147.75.109.163 port 59220
Jan 13 20:56:50.498312 sshd-session[4271]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:50.504092 systemd[1]: sshd@25-10.128.0.13:22-147.75.109.163:59220.service: Deactivated successfully.
Jan 13 20:56:50.507493 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 20:56:50.507952 systemd[1]: session-26.scope: Consumed 1.616s CPU time.
Jan 13 20:56:50.511186 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit.
Jan 13 20:56:50.513700 systemd-logind[1441]: Removed session 26.
Jan 13 20:56:50.554816 systemd[1]: Started sshd@26-10.128.0.13:22-147.75.109.163:44368.service - OpenSSH per-connection server daemon (147.75.109.163:44368).
Jan 13 20:56:50.844958 sshd[4435]: Accepted publickey for core from 147.75.109.163 port 44368 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:50.846739 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:50.852939 systemd-logind[1441]: New session 27 of user core.
Jan 13 20:56:50.863628 systemd[1]: Started session-27.scope - Session 27 of User core.
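"Cleaned up orphaned pod volumes dir" is the kubelet's last act for a deleted pod: once every volume unmount above has completed, it removes /var/lib/kubelet/pods/<uid>/volumes. A sketch that checks for leftovers under the same layout (pod UID taken from the log; the directory layout is the standard kubelet one):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        podUID := "0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" // from the log
        dir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
        entries, err := os.ReadDir(dir)
        if os.IsNotExist(err) {
            fmt.Println("already cleaned up:", dir) // the state the log reports
            return
        } else if err != nil {
            panic(err)
        }
        for _, e := range entries {
            fmt.Println("leftover volume plugin dir:", e.Name())
        }
    }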
Jan 13 20:56:51.420231 ntpd[1423]: Deleting interface #11 lxc_health, fe80::6435:3cff:fe77:e29c%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs
Jan 13 20:56:51.420952 ntpd[1423]: 13 Jan 20:56:51 ntpd[1423]: Deleting interface #11 lxc_health, fe80::6435:3cff:fe77:e29c%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs
Jan 13 20:56:51.832769 containerd[1454]: time="2025-01-13T20:56:51.832702822Z" level=info msg="StopPodSandbox for \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\""
Jan 13 20:56:51.834036 containerd[1454]: time="2025-01-13T20:56:51.833017959Z" level=info msg="TearDown network for sandbox \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\" successfully"
Jan 13 20:56:51.834036 containerd[1454]: time="2025-01-13T20:56:51.833613498Z" level=info msg="StopPodSandbox for \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\" returns successfully"
Jan 13 20:56:51.835014 containerd[1454]: time="2025-01-13T20:56:51.834777461Z" level=info msg="RemovePodSandbox for \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\""
Jan 13 20:56:51.835014 containerd[1454]: time="2025-01-13T20:56:51.834832865Z" level=info msg="Forcibly stopping sandbox \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\""
Jan 13 20:56:51.835014 containerd[1454]: time="2025-01-13T20:56:51.834931153Z" level=info msg="TearDown network for sandbox \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\" successfully"
Jan 13 20:56:51.839975 containerd[1454]: time="2025-01-13T20:56:51.839556070Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
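The StopPodSandbox / "Forcibly stopping sandbox" / RemovePodSandbox run above is containerd answering the kubelet's periodic sandbox garbage collection. Both CRI calls are idempotent, which is why TearDown "succeeds" for a sandbox whose workload is long gone. The same two calls, sketched with the same assumed socket as earlier:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        id := "5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316" // from the log
        // Stop is a no-op for an already-stopped sandbox; Remove then deletes the record.
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
            log.Fatal(err)
        }
        if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
            log.Fatal(err)
        }
    }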
Jan 13 20:56:51.839975 containerd[1454]: time="2025-01-13T20:56:51.839622164Z" level=info msg="RemovePodSandbox \"5b47e72d7864e743fcc206dfba729463d445a0f23ce206a08b90a0d1b3b78316\" returns successfully"
Jan 13 20:56:51.840466 containerd[1454]: time="2025-01-13T20:56:51.840304913Z" level=info msg="StopPodSandbox for \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\""
Jan 13 20:56:51.840776 containerd[1454]: time="2025-01-13T20:56:51.840400672Z" level=info msg="TearDown network for sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" successfully"
Jan 13 20:56:51.840776 containerd[1454]: time="2025-01-13T20:56:51.840669135Z" level=info msg="StopPodSandbox for \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" returns successfully"
Jan 13 20:56:51.842380 containerd[1454]: time="2025-01-13T20:56:51.841196058Z" level=info msg="RemovePodSandbox for \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\""
Jan 13 20:56:51.842380 containerd[1454]: time="2025-01-13T20:56:51.841311120Z" level=info msg="Forcibly stopping sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\""
Jan 13 20:56:51.842380 containerd[1454]: time="2025-01-13T20:56:51.841389246Z" level=info msg="TearDown network for sandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" successfully"
Jan 13 20:56:51.847908 containerd[1454]: time="2025-01-13T20:56:51.847823491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:56:51.848117 containerd[1454]: time="2025-01-13T20:56:51.848086013Z" level=info msg="RemovePodSandbox \"439a137ff0a0fa97b1aceb4575642d422ede9f4dc1f6a68692efa555c5823503\" returns successfully"
Jan 13 20:56:52.013284 kubelet[2639]: I0113 20:56:52.013231 2639 topology_manager.go:215] "Topology Admit Handler" podUID="89d60a2a-ea09-4113-86cb-40755071dabb" podNamespace="kube-system" podName="cilium-njn9g"
Jan 13 20:56:52.013911 kubelet[2639]: E0113 20:56:52.013320 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" containerName="mount-cgroup"
Jan 13 20:56:52.013911 kubelet[2639]: E0113 20:56:52.013336 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" containerName="apply-sysctl-overwrites"
Jan 13 20:56:52.013911 kubelet[2639]: E0113 20:56:52.013347 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" containerName="mount-bpf-fs"
Jan 13 20:56:52.013911 kubelet[2639]: E0113 20:56:52.013360 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" containerName="clean-cilium-state"
Jan 13 20:56:52.013911 kubelet[2639]: E0113 20:56:52.013381 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" containerName="cilium-agent"
Jan 13 20:56:52.013911 kubelet[2639]: E0113 20:56:52.013395 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8c1bab34-5e34-4a23-82fa-6cc20c2b86fc" containerName="cilium-operator"
Jan 13 20:56:52.013911 kubelet[2639]: I0113 20:56:52.013459 2639 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1bab34-5e34-4a23-82fa-6cc20c2b86fc" containerName="cilium-operator"
Jan 13 20:56:52.013911 kubelet[2639]: I0113 20:56:52.013474 2639 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aaf1327-70f4-4727-a2d4-ff0db35bb2ae" containerName="cilium-agent"
Jan 13 20:56:52.029147 sshd[4437]: Connection closed by 147.75.109.163 port 44368
Jan 13 20:56:52.029695 systemd[1]: Created slice kubepods-burstable-pod89d60a2a_ea09_4113_86cb_40755071dabb.slice - libcontainer container kubepods-burstable-pod89d60a2a_ea09_4113_86cb_40755071dabb.slice.
Jan 13 20:56:52.031300 sshd-session[4435]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:52.041878 systemd[1]: sshd@26-10.128.0.13:22-147.75.109.163:44368.service: Deactivated successfully.
Jan 13 20:56:52.046769 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 20:56:52.051728 systemd-logind[1441]: Session 27 logged out. Waiting for processes to exit.
Jan 13 20:56:52.054805 systemd-logind[1441]: Removed session 27.
Jan 13 20:56:52.065598 kubelet[2639]: E0113 20:56:52.065538 2639 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:56:52.094856 systemd[1]: Started sshd@27-10.128.0.13:22-147.75.109.163:44384.service - OpenSSH per-connection server daemon (147.75.109.163:44384).
Jan 13 20:56:52.101023 kubelet[2639]: I0113 20:56:52.100168 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89d60a2a-ea09-4113-86cb-40755071dabb-hostproc\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101023 kubelet[2639]: I0113 20:56:52.100230 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/89d60a2a-ea09-4113-86cb-40755071dabb-cilium-ipsec-secrets\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101023 kubelet[2639]: I0113 20:56:52.100266 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89d60a2a-ea09-4113-86cb-40755071dabb-host-proc-sys-net\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101023 kubelet[2639]: I0113 20:56:52.100303 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqh4n\" (UniqueName: \"kubernetes.io/projected/89d60a2a-ea09-4113-86cb-40755071dabb-kube-api-access-mqh4n\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101023 kubelet[2639]: I0113 20:56:52.100342 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89d60a2a-ea09-4113-86cb-40755071dabb-cilium-cgroup\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101023 kubelet[2639]: I0113 20:56:52.100377 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89d60a2a-ea09-4113-86cb-40755071dabb-cni-path\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101401 kubelet[2639]: I0113 20:56:52.100430 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89d60a2a-ea09-4113-86cb-40755071dabb-cilium-run\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101401 kubelet[2639]: I0113 20:56:52.100471 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89d60a2a-ea09-4113-86cb-40755071dabb-cilium-config-path\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101401 kubelet[2639]: I0113 20:56:52.100508 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89d60a2a-ea09-4113-86cb-40755071dabb-etc-cni-netd\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101401 kubelet[2639]: I0113 20:56:52.100548 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89d60a2a-ea09-4113-86cb-40755071dabb-xtables-lock\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101401 kubelet[2639]: I0113 20:56:52.100581 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89d60a2a-ea09-4113-86cb-40755071dabb-host-proc-sys-kernel\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101401 kubelet[2639]: I0113 20:56:52.100625 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89d60a2a-ea09-4113-86cb-40755071dabb-hubble-tls\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101742 kubelet[2639]: I0113 20:56:52.100659 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89d60a2a-ea09-4113-86cb-40755071dabb-bpf-maps\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101742 kubelet[2639]: I0113 20:56:52.100690 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89d60a2a-ea09-4113-86cb-40755071dabb-lib-modules\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.101742 kubelet[2639]: I0113 20:56:52.100724 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89d60a2a-ea09-4113-86cb-40755071dabb-clustermesh-secrets\") pod \"cilium-njn9g\" (UID: \"89d60a2a-ea09-4113-86cb-40755071dabb\") " pod="kube-system/cilium-njn9g"
Jan 13 20:56:52.334941 containerd[1454]: time="2025-01-13T20:56:52.334884475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-njn9g,Uid:89d60a2a-ea09-4113-86cb-40755071dabb,Namespace:kube-system,Attempt:0,}"
Jan 13 20:56:52.372340 containerd[1454]: time="2025-01-13T20:56:52.371768104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:56:52.372340 containerd[1454]: time="2025-01-13T20:56:52.371831857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:56:52.372340 containerd[1454]: time="2025-01-13T20:56:52.371850606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:56:52.372340 containerd[1454]: time="2025-01-13T20:56:52.371949457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:56:52.407655 systemd[1]: Started cri-containerd-70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd.scope - libcontainer container 70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd.
Jan 13 20:56:52.414266 sshd[4449]: Accepted publickey for core from 147.75.109.163 port 44384 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:52.417873 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:52.428065 systemd-logind[1441]: New session 28 of user core.
Jan 13 20:56:52.436034 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 13 20:56:52.446755 kubelet[2639]: E0113 20:56:52.446565 2639 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89d60a2a_ea09_4113_86cb_40755071dabb.slice/cri-containerd-70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd.scope\": RecentStats: unable to find data in memory cache]"
Jan 13 20:56:52.458357 containerd[1454]: time="2025-01-13T20:56:52.458304109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-njn9g,Uid:89d60a2a-ea09-4113-86cb-40755071dabb,Namespace:kube-system,Attempt:0,} returns sandbox id \"70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd\""
Jan 13 20:56:52.465178 containerd[1454]: time="2025-01-13T20:56:52.465119247Z" level=info msg="CreateContainer within sandbox \"70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:56:52.482112 containerd[1454]: time="2025-01-13T20:56:52.482051226Z" level=info msg="CreateContainer within sandbox \"70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"541c0b98119f1b0d572a4e8ab2b41d44cde692608b59956d42b5687a314174bd\""
Jan 13 20:56:52.482742 containerd[1454]: time="2025-01-13T20:56:52.482667909Z" level=info msg="StartContainer for \"541c0b98119f1b0d572a4e8ab2b41d44cde692608b59956d42b5687a314174bd\""
Jan 13 20:56:52.517646 systemd[1]: Started cri-containerd-541c0b98119f1b0d572a4e8ab2b41d44cde692608b59956d42b5687a314174bd.scope - libcontainer container 541c0b98119f1b0d572a4e8ab2b41d44cde692608b59956d42b5687a314174bd.
Jan 13 20:56:52.555162 containerd[1454]: time="2025-01-13T20:56:52.555089332Z" level=info msg="StartContainer for \"541c0b98119f1b0d572a4e8ab2b41d44cde692608b59956d42b5687a314174bd\" returns successfully"
Jan 13 20:56:52.567501 systemd[1]: cri-containerd-541c0b98119f1b0d572a4e8ab2b41d44cde692608b59956d42b5687a314174bd.scope: Deactivated successfully.
Jan 13 20:56:52.607773 containerd[1454]: time="2025-01-13T20:56:52.607682144Z" level=info msg="shim disconnected" id=541c0b98119f1b0d572a4e8ab2b41d44cde692608b59956d42b5687a314174bd namespace=k8s.io
Jan 13 20:56:52.607773 containerd[1454]: time="2025-01-13T20:56:52.607769602Z" level=warning msg="cleaning up after shim disconnected" id=541c0b98119f1b0d572a4e8ab2b41d44cde692608b59956d42b5687a314174bd namespace=k8s.io
Jan 13 20:56:52.608133 containerd[1454]: time="2025-01-13T20:56:52.607783684Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:56:52.628507 sshd[4491]: Connection closed by 147.75.109.163 port 44384
Jan 13 20:56:52.629042 containerd[1454]: time="2025-01-13T20:56:52.626850416Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:56:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:56:52.629511 sshd-session[4449]: pam_unix(sshd:session): session closed for user core
Jan 13 20:56:52.635674 systemd[1]: sshd@27-10.128.0.13:22-147.75.109.163:44384.service: Deactivated successfully.
Jan 13 20:56:52.638917 systemd[1]: session-28.scope: Deactivated successfully.
Jan 13 20:56:52.640171 systemd-logind[1441]: Session 28 logged out. Waiting for processes to exit.
Jan 13 20:56:52.641758 systemd-logind[1441]: Removed session 28.
Jan 13 20:56:52.689829 systemd[1]: Started sshd@28-10.128.0.13:22-147.75.109.163:44392.service - OpenSSH per-connection server daemon (147.75.109.163:44392).
Jan 13 20:56:52.981900 sshd[4564]: Accepted publickey for core from 147.75.109.163 port 44392 ssh2: RSA SHA256:7KTlWmFJHQ3D0vLlDlyP2trKpfMpba1Zwqp6TqynAtY
Jan 13 20:56:52.983103 sshd-session[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:56:52.989782 systemd-logind[1441]: New session 29 of user core.
Jan 13 20:56:52.994626 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 13 20:56:53.283088 containerd[1454]: time="2025-01-13T20:56:53.282780066Z" level=info msg="CreateContainer within sandbox \"70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:56:53.306579 containerd[1454]: time="2025-01-13T20:56:53.304085272Z" level=info msg="CreateContainer within sandbox \"70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ff9b3995a60edd9628090565846b979d74fd871d9e3b78d108e5990385fdc6d1\""
Jan 13 20:56:53.309310 containerd[1454]: time="2025-01-13T20:56:53.309215299Z" level=info msg="StartContainer for \"ff9b3995a60edd9628090565846b979d74fd871d9e3b78d108e5990385fdc6d1\""
Jan 13 20:56:53.312066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3744173843.mount: Deactivated successfully.
Jan 13 20:56:53.362634 systemd[1]: Started cri-containerd-ff9b3995a60edd9628090565846b979d74fd871d9e3b78d108e5990385fdc6d1.scope - libcontainer container ff9b3995a60edd9628090565846b979d74fd871d9e3b78d108e5990385fdc6d1.
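mount-cgroup starts, runs for a fraction of a second, and exits: StartContainer returns, the .scope unit deactivates, and the shim is torn down. The "failed to remove runc container ... exit status 255" warning is cleanup noise against already-deleted runc state, not a container failure. Waiting for such an exit with the containerd client looks roughly like this (same assumed socket and namespace; once cleanup has run, LoadContainer itself would return NotFound):

    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        c, err := client.LoadContainer(ctx, "541c0b98119f1b0d572a4e8ab2b41d44cde692608b59956d42b5687a314174bd")
        if err != nil {
            panic(err)
        }
        task, err := c.Task(ctx, nil)
        if err != nil {
            panic(err)
        }
        ch, err := task.Wait(ctx) // channel fires when the init process exits
        if err != nil {
            panic(err)
        }
        exit := <-ch
        code, exitedAt, err := exit.Result()
        if err != nil {
            panic(err)
        }
        fmt.Printf("exit code %d at %s\n", code, exitedAt)
    }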
Jan 13 20:56:53.399031 containerd[1454]: time="2025-01-13T20:56:53.398972610Z" level=info msg="StartContainer for \"ff9b3995a60edd9628090565846b979d74fd871d9e3b78d108e5990385fdc6d1\" returns successfully"
Jan 13 20:56:53.406812 systemd[1]: cri-containerd-ff9b3995a60edd9628090565846b979d74fd871d9e3b78d108e5990385fdc6d1.scope: Deactivated successfully.
Jan 13 20:56:53.446211 containerd[1454]: time="2025-01-13T20:56:53.446090527Z" level=info msg="shim disconnected" id=ff9b3995a60edd9628090565846b979d74fd871d9e3b78d108e5990385fdc6d1 namespace=k8s.io
Jan 13 20:56:53.446211 containerd[1454]: time="2025-01-13T20:56:53.446210200Z" level=warning msg="cleaning up after shim disconnected" id=ff9b3995a60edd9628090565846b979d74fd871d9e3b78d108e5990385fdc6d1 namespace=k8s.io
Jan 13 20:56:53.446610 containerd[1454]: time="2025-01-13T20:56:53.446226933Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:56:54.213277 systemd[1]: run-containerd-runc-k8s.io-ff9b3995a60edd9628090565846b979d74fd871d9e3b78d108e5990385fdc6d1-runc.iGUba4.mount: Deactivated successfully.
Jan 13 20:56:54.213446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff9b3995a60edd9628090565846b979d74fd871d9e3b78d108e5990385fdc6d1-rootfs.mount: Deactivated successfully.
Jan 13 20:56:54.252432 kubelet[2639]: I0113 20:56:54.251987 2639 setters.go:568] "Node became not ready" node="ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:56:54Z","lastTransitionTime":"2025-01-13T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:56:54.289903 containerd[1454]: time="2025-01-13T20:56:54.289849413Z" level=info msg="CreateContainer within sandbox \"70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:56:54.327076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592700974.mount: Deactivated successfully.
Jan 13 20:56:54.328567 containerd[1454]: time="2025-01-13T20:56:54.328267193Z" level=info msg="CreateContainer within sandbox \"70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"66abd9186dc0b778dde3db7ded5eff06f704813fa1029dad53ea46dff46ed277\""
Jan 13 20:56:54.340444 containerd[1454]: time="2025-01-13T20:56:54.339426852Z" level=info msg="StartContainer for \"66abd9186dc0b778dde3db7ded5eff06f704813fa1029dad53ea46dff46ed277\""
Jan 13 20:56:54.390835 systemd[1]: Started cri-containerd-66abd9186dc0b778dde3db7ded5eff06f704813fa1029dad53ea46dff46ed277.scope - libcontainer container 66abd9186dc0b778dde3db7ded5eff06f704813fa1029dad53ea46dff46ed277.
Jan 13 20:56:54.438190 containerd[1454]: time="2025-01-13T20:56:54.438136473Z" level=info msg="StartContainer for \"66abd9186dc0b778dde3db7ded5eff06f704813fa1029dad53ea46dff46ed277\" returns successfully"
Jan 13 20:56:54.443029 systemd[1]: cri-containerd-66abd9186dc0b778dde3db7ded5eff06f704813fa1029dad53ea46dff46ed277.scope: Deactivated successfully.
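"Node became not ready" is the kubelet flipping its own Ready condition to False: the old Cilium agent is gone and the replacement has not yet initialized the CNI plugin. Reading that condition back through the API server looks roughly like this (node name from the log; the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "ci-4152-2-0-37573f4bda6f8d98a6ad.c.flatcar-212911.internal", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                // During this window the log shows Ready=False, reason=KubeletNotReady.
                fmt.Printf("Ready=%s reason=%s\n", c.Status, c.Reason)
            }
        }
    }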
Jan 13 20:56:54.474744 containerd[1454]: time="2025-01-13T20:56:54.474580262Z" level=info msg="shim disconnected" id=66abd9186dc0b778dde3db7ded5eff06f704813fa1029dad53ea46dff46ed277 namespace=k8s.io
Jan 13 20:56:54.474744 containerd[1454]: time="2025-01-13T20:56:54.474652663Z" level=warning msg="cleaning up after shim disconnected" id=66abd9186dc0b778dde3db7ded5eff06f704813fa1029dad53ea46dff46ed277 namespace=k8s.io
Jan 13 20:56:54.474744 containerd[1454]: time="2025-01-13T20:56:54.474666658Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:56:55.213234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66abd9186dc0b778dde3db7ded5eff06f704813fa1029dad53ea46dff46ed277-rootfs.mount: Deactivated successfully.
Jan 13 20:56:55.292096 containerd[1454]: time="2025-01-13T20:56:55.292026346Z" level=info msg="CreateContainer within sandbox \"70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:56:55.314062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230268323.mount: Deactivated successfully.
Jan 13 20:56:55.316061 containerd[1454]: time="2025-01-13T20:56:55.315750334Z" level=info msg="CreateContainer within sandbox \"70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d23067d3a64906f31de910edeee41e779ecfd1acb25faffe6c63b6ceb814e594\""
Jan 13 20:56:55.318534 containerd[1454]: time="2025-01-13T20:56:55.317240138Z" level=info msg="StartContainer for \"d23067d3a64906f31de910edeee41e779ecfd1acb25faffe6c63b6ceb814e594\""
Jan 13 20:56:55.367614 systemd[1]: Started cri-containerd-d23067d3a64906f31de910edeee41e779ecfd1acb25faffe6c63b6ceb814e594.scope - libcontainer container d23067d3a64906f31de910edeee41e779ecfd1acb25faffe6c63b6ceb814e594.
Jan 13 20:56:55.401123 systemd[1]: cri-containerd-d23067d3a64906f31de910edeee41e779ecfd1acb25faffe6c63b6ceb814e594.scope: Deactivated successfully.
Jan 13 20:56:55.404102 containerd[1454]: time="2025-01-13T20:56:55.404053326Z" level=info msg="StartContainer for \"d23067d3a64906f31de910edeee41e779ecfd1acb25faffe6c63b6ceb814e594\" returns successfully"
Jan 13 20:56:55.433051 containerd[1454]: time="2025-01-13T20:56:55.432921256Z" level=info msg="shim disconnected" id=d23067d3a64906f31de910edeee41e779ecfd1acb25faffe6c63b6ceb814e594 namespace=k8s.io
Jan 13 20:56:55.433051 containerd[1454]: time="2025-01-13T20:56:55.433048277Z" level=warning msg="cleaning up after shim disconnected" id=d23067d3a64906f31de910edeee41e779ecfd1acb25faffe6c63b6ceb814e594 namespace=k8s.io
Jan 13 20:56:55.433051 containerd[1454]: time="2025-01-13T20:56:55.433064367Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:56:56.213731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d23067d3a64906f31de910edeee41e779ecfd1acb25faffe6c63b6ceb814e594-rootfs.mount: Deactivated successfully.
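mount-bpf-fs and clean-cilium-state follow the same create, start, exit, shim-cleanup cycle as the earlier init containers. At any point during this churn the surviving containers can be enumerated directly from containerd (same assumed socket; the k8s.io namespace is the one the shim entries name):

    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        cs, err := client.Containers(ctx)
        if err != nil {
            panic(err)
        }
        for _, c := range cs {
            fmt.Println(c.ID()) // sandbox plus whichever init/agent container is current
        }
    }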
Jan 13 20:56:56.298716 containerd[1454]: time="2025-01-13T20:56:56.298516697Z" level=info msg="CreateContainer within sandbox \"70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:56:56.321338 containerd[1454]: time="2025-01-13T20:56:56.321274892Z" level=info msg="CreateContainer within sandbox \"70f40ad6a9dfb7d6058542fa99bbb3cd4c5cf5d6ec49d9aa8252670b09ee43dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d40f89b4600bb82b01160afea0f4b60cffb38ffc5b0b3a020cb4b7a5d2e0a129\""
Jan 13 20:56:56.323455 containerd[1454]: time="2025-01-13T20:56:56.322177444Z" level=info msg="StartContainer for \"d40f89b4600bb82b01160afea0f4b60cffb38ffc5b0b3a020cb4b7a5d2e0a129\""
Jan 13 20:56:56.366374 systemd[1]: run-containerd-runc-k8s.io-d40f89b4600bb82b01160afea0f4b60cffb38ffc5b0b3a020cb4b7a5d2e0a129-runc.oipqFs.mount: Deactivated successfully.
Jan 13 20:56:56.375594 systemd[1]: Started cri-containerd-d40f89b4600bb82b01160afea0f4b60cffb38ffc5b0b3a020cb4b7a5d2e0a129.scope - libcontainer container d40f89b4600bb82b01160afea0f4b60cffb38ffc5b0b3a020cb4b7a5d2e0a129.
Jan 13 20:56:56.418455 containerd[1454]: time="2025-01-13T20:56:56.415463405Z" level=info msg="StartContainer for \"d40f89b4600bb82b01160afea0f4b60cffb38ffc5b0b3a020cb4b7a5d2e0a129\" returns successfully"
Jan 13 20:56:56.845363 kubelet[2639]: E0113 20:56:56.844946 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-jdkrr" podUID="ff059bf6-850c-48a4-acc6-10a7bbb0a30b"
Jan 13 20:56:56.871543 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 20:57:00.141973 systemd-networkd[1366]: lxc_health: Link UP
Jan 13 20:57:00.150110 systemd-networkd[1366]: lxc_health: Gained carrier
Jan 13 20:57:00.372863 kubelet[2639]: I0113 20:57:00.372432 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-njn9g" podStartSLOduration=9.372344849 podStartE2EDuration="9.372344849s" podCreationTimestamp="2025-01-13 20:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:56:57.353476034 +0000 UTC m=+125.683985606" watchObservedRunningTime="2025-01-13 20:57:00.372344849 +0000 UTC m=+128.702854422"
Jan 13 20:57:01.429843 systemd-networkd[1366]: lxc_health: Gained IPv6LL
Jan 13 20:57:01.687604 systemd[1]: run-containerd-runc-k8s.io-d40f89b4600bb82b01160afea0f4b60cffb38ffc5b0b3a020cb4b7a5d2e0a129-runc.IPYOS6.mount: Deactivated successfully.
Jan 13 20:57:03.975209 systemd[1]: run-containerd-runc-k8s.io-d40f89b4600bb82b01160afea0f4b60cffb38ffc5b0b3a020cb4b7a5d2e0a129-runc.v29TIK.mount: Deactivated successfully.
Jan 13 20:57:04.420310 ntpd[1423]: Listen normally on 14 lxc_health [fe80::4ccd:a1ff:fe73:9eb2%14]:123
Jan 13 20:57:04.421018 ntpd[1423]: 13 Jan 20:57:04 ntpd[1423]: Listen normally on 14 lxc_health [fe80::4ccd:a1ff:fe73:9eb2%14]:123
Jan 13 20:57:06.209871 systemd[1]: run-containerd-runc-k8s.io-d40f89b4600bb82b01160afea0f4b60cffb38ffc5b0b3a020cb4b7a5d2e0a129-runc.l8Krfi.mount: Deactivated successfully.
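With cilium-agent running and lxc_health back up, the startup-latency tracker reports podStartSLOduration=9.372344849 for cilium-njn9g. The figure is watchObservedRunningTime minus podCreationTimestamp; both image-pull timestamps are zero, so no pull time is subtracted. The arithmetic checks out:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps taken verbatim from the pod_startup_latency_tracker entry.
        created, err := time.Parse(time.RFC3339, "2025-01-13T20:56:51Z")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(time.RFC3339Nano, "2025-01-13T20:57:00.372344849Z")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // prints 9.372344849s
    }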
Jan 13 20:57:06.390937 sshd[4566]: Connection closed by 147.75.109.163 port 44392
Jan 13 20:57:06.392018 sshd-session[4564]: pam_unix(sshd:session): session closed for user core
Jan 13 20:57:06.397443 systemd[1]: sshd@28-10.128.0.13:22-147.75.109.163:44392.service: Deactivated successfully.
Jan 13 20:57:06.400401 systemd[1]: session-29.scope: Deactivated successfully.
Jan 13 20:57:06.402681 systemd-logind[1441]: Session 29 logged out. Waiting for processes to exit.
Jan 13 20:57:06.404382 systemd-logind[1441]: Removed session 29.