Dec 13 01:16:18.098642 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:16:18.098685 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:16:18.098705 kernel: BIOS-provided physical RAM map:
Dec 13 01:16:18.098719 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Dec 13 01:16:18.098733 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Dec 13 01:16:18.098747 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Dec 13 01:16:18.098764 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Dec 13 01:16:18.099015 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Dec 13 01:16:18.099032 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Dec 13 01:16:18.099047 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Dec 13 01:16:18.099062 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Dec 13 01:16:18.099077 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Dec 13 01:16:18.099092 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Dec 13 01:16:18.099107 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Dec 13 01:16:18.099131 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Dec 13 01:16:18.099148 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Dec 13 01:16:18.099164 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Dec 13 01:16:18.099180 kernel: NX (Execute Disable) protection: active
Dec 13 01:16:18.099196 kernel: APIC: Static calls initialized
Dec 13 01:16:18.099213 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:16:18.099229 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Dec 13 01:16:18.099245 kernel: SMBIOS 2.4 present.
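The BIOS-e820 map above is the firmware's view of physical memory. A minimal sketch in Python (the regex and the "usable" accounting are assumptions about this particular dmesg format, not kernel code) that sums the usable ranges:

    import re

    # Hypothetical helper: sum the "usable" ranges from BIOS-e820 dmesg lines.
    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w.*)$")

    def usable_bytes(lines):
        total = 0
        for line in lines:
            m = E820.search(line)
            if m and m.group(3).strip() == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                total += end - start + 1  # e820 ranges are inclusive
        return total

Summing the usable ranges above gives roughly 7.5 GiB, consistent with the "Memory: 7513384K/7860584K available" line later in this log.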
Dec 13 01:16:18.099262 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Dec 13 01:16:18.099278 kernel: Hypervisor detected: KVM
Dec 13 01:16:18.099298 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:16:18.099314 kernel: kvm-clock: using sched offset of 11971977379 cycles
Dec 13 01:16:18.099331 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:16:18.099348 kernel: tsc: Detected 2299.998 MHz processor
Dec 13 01:16:18.099364 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:16:18.099381 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:16:18.099397 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Dec 13 01:16:18.099414 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Dec 13 01:16:18.099430 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:16:18.099450 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 13 01:16:18.099466 kernel: Using GB pages for direct mapping
Dec 13 01:16:18.099482 kernel: Secure boot disabled
Dec 13 01:16:18.099499 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:16:18.099515 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Dec 13 01:16:18.099532 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Dec 13 01:16:18.099549 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Dec 13 01:16:18.099580 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Dec 13 01:16:18.099601 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Dec 13 01:16:18.099619 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Dec 13 01:16:18.099637 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Dec 13 01:16:18.099654 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Dec 13 01:16:18.099672 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Dec 13 01:16:18.099690 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Dec 13 01:16:18.099710 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Dec 13 01:16:18.099728 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Dec 13 01:16:18.099746 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Dec 13 01:16:18.099763 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Dec 13 01:16:18.099798 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Dec 13 01:16:18.099815 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Dec 13 01:16:18.099833 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Dec 13 01:16:18.099851 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Dec 13 01:16:18.099868 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Dec 13 01:16:18.099890 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Dec 13 01:16:18.099908 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:16:18.099925 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 01:16:18.099942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 01:16:18.099960 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Dec 13 01:16:18.099983 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Dec 13 01:16:18.100001 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Dec 13 01:16:18.100019 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Dec 13 01:16:18.100036 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Dec 13 01:16:18.100058 kernel: Zone ranges:
Dec 13 01:16:18.100076 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:16:18.100093 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 01:16:18.100111 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 01:16:18.100128 kernel: Movable zone start for each node
Dec 13 01:16:18.100146 kernel: Early memory node ranges
Dec 13 01:16:18.100163 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Dec 13 01:16:18.100181 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Dec 13 01:16:18.100198 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Dec 13 01:16:18.100219 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Dec 13 01:16:18.100236 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 01:16:18.100254 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Dec 13 01:16:18.100271 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:16:18.100289 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Dec 13 01:16:18.100306 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Dec 13 01:16:18.100324 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 13 01:16:18.100341 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Dec 13 01:16:18.100359 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 01:16:18.100380 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:16:18.100397 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:16:18.100415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:16:18.100432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:16:18.100450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:16:18.100467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:16:18.100485 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:16:18.100503 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:16:18.100520 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 01:16:18.100541 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:16:18.100559 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:16:18.100583 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:16:18.100600 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:16:18.100618 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:16:18.100635 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:16:18.100652 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:16:18.100670 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:16:18.100689 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:16:18.100711 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:16:18.100728 kernel: random: crng init done
Dec 13 01:16:18.100746 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 01:16:18.100764 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:16:18.100803 kernel: Fallback order for Node 0: 0
Dec 13 01:16:18.100820 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Dec 13 01:16:18.100839 kernel: Policy zone: Normal
Dec 13 01:16:18.100856 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:16:18.100878 kernel: software IO TLB: area num 2.
Dec 13 01:16:18.100896 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 346940K reserved, 0K cma-reserved)
Dec 13 01:16:18.100914 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:16:18.100932 kernel: Kernel/User page tables isolation: enabled
Dec 13 01:16:18.100949 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:16:18.100967 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:16:18.100985 kernel: Dynamic Preempt: voluntary
Dec 13 01:16:18.101002 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:16:18.101021 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:16:18.101055 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:16:18.101074 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:16:18.101093 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:16:18.101115 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:16:18.101133 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:16:18.101152 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:16:18.101170 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 01:16:18.101188 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:16:18.101207 kernel: Console: colour dummy device 80x25
Dec 13 01:16:18.101229 kernel: printk: console [ttyS0] enabled
Dec 13 01:16:18.101247 kernel: ACPI: Core revision 20230628
Dec 13 01:16:18.101266 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:16:18.101284 kernel: x2apic enabled
Dec 13 01:16:18.101303 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:16:18.101321 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Dec 13 01:16:18.101340 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 01:16:18.101359 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Dec 13 01:16:18.101381 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Dec 13 01:16:18.101400 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Dec 13 01:16:18.101418 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:16:18.101437 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 01:16:18.101455 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 01:16:18.101474 kernel: Spectre V2 : Mitigation: IBRS
Dec 13 01:16:18.101492 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:16:18.101511 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:16:18.101530 kernel: RETBleed: Mitigation: IBRS
Dec 13 01:16:18.101552 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:16:18.101576 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Dec 13 01:16:18.101595 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:16:18.101613 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 01:16:18.101632 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:16:18.101650 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:16:18.101668 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:16:18.101687 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:16:18.101706 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:16:18.101728 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 01:16:18.101747 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:16:18.101766 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:16:18.101797 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:16:18.101816 kernel: landlock: Up and running.
Dec 13 01:16:18.101834 kernel: SELinux: Initializing.
Dec 13 01:16:18.101852 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:16:18.101871 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:16:18.101890 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Dec 13 01:16:18.101914 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:16:18.101932 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:16:18.101951 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:16:18.101970 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Dec 13 01:16:18.101988 kernel: signal: max sigframe size: 1776
Dec 13 01:16:18.102007 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:16:18.102025 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:16:18.102044 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:16:18.102062 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:16:18.102084 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:16:18.102103 kernel: .... node #0, CPUs: #1
Dec 13 01:16:18.102122 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 01:16:18.102141 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:16:18.102160 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:16:18.102179 kernel: smpboot: Max logical packages: 1
Dec 13 01:16:18.102197 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 13 01:16:18.102216 kernel: devtmpfs: initialized
Dec 13 01:16:18.102238 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:16:18.102256 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Dec 13 01:16:18.102275 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:16:18.102294 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:16:18.102313 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:16:18.102331 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:16:18.102350 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:16:18.102368 kernel: audit: type=2000 audit(1734052577.172:1): state=initialized audit_enabled=0 res=1
Dec 13 01:16:18.102386 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:16:18.102409 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:16:18.102427 kernel: cpuidle: using governor menu
Dec 13 01:16:18.102451 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:16:18.103935 kernel: dca service started, version 1.12.1
Dec 13 01:16:18.103955 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:16:18.103973 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
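The BogoMIPS figures above follow directly from the TSC frequency detected earlier. A worked check (HZ=1000 is an assumption about this kernel's config; the kernel prints lpj * HZ / 500000 to two decimal places):

    # Worked check of the BogoMIPS values printed above (CONFIG_HZ=1000 assumed).
    lpj = 2299998                  # loops-per-jiffy from "(lpj=2299998)"
    HZ = 1000                      # assumption, not stated in the log
    bogomips = lpj * HZ / 500000   # 4599.996, printed as "4599.99"
    total = 2 * bogomips           # 9199.992 -> "9199.99 BogoMIPS" for 2 CPUs
    print(bogomips, total)

With the 2299.998 MHz TSC above, this works out to exactly twice the clock in MHz, which is why both printed values track the CPU frequency.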
Dec 13 01:16:18.103991 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:16:18.104009 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:16:18.104027 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:16:18.104052 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:16:18.104071 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:16:18.104091 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:16:18.104111 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:16:18.104131 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:16:18.104151 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 01:16:18.104171 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:16:18.104191 kernel: ACPI: Interpreter enabled
Dec 13 01:16:18.104211 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:16:18.104235 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:16:18.104256 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:16:18.104276 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 13 01:16:18.104296 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 01:16:18.104316 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:16:18.104562 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:16:18.104763 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 01:16:18.106280 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 01:16:18.106437 kernel: PCI host bridge to bus 0000:00
Dec 13 01:16:18.106882 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:16:18.107175 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:16:18.107384 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:16:18.107565 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Dec 13 01:16:18.107753 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:16:18.108061 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 01:16:18.108284 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Dec 13 01:16:18.108497 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 01:16:18.108697 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 01:16:18.108944 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Dec 13 01:16:18.109152 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Dec 13 01:16:18.109362 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Dec 13 01:16:18.109574 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:16:18.111879 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Dec 13 01:16:18.112117 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Dec 13 01:16:18.112338 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:16:18.112544 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Dec 13 01:16:18.112735 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Dec 13 01:16:18.112766 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:16:18.112804 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:16:18.112828 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:16:18.112845 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:16:18.112862 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 01:16:18.112880 kernel: iommu: Default domain type: Translated
Dec 13 01:16:18.112900 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:16:18.112919 kernel: efivars: Registered efivars operations
Dec 13 01:16:18.112939 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:16:18.112963 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:16:18.112982 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Dec 13 01:16:18.113000 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Dec 13 01:16:18.113017 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Dec 13 01:16:18.113034 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Dec 13 01:16:18.113052 kernel: vgaarb: loaded
Dec 13 01:16:18.113070 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:16:18.113088 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:16:18.113107 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:16:18.113131 kernel: pnp: PnP ACPI init
Dec 13 01:16:18.113151 kernel: pnp: PnP ACPI: found 7 devices
Dec 13 01:16:18.113170 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:16:18.113189 kernel: NET: Registered PF_INET protocol family
Dec 13 01:16:18.113208 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:16:18.113226 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 01:16:18.113245 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:16:18.113264 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:16:18.113282 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 13 01:16:18.113304 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 01:16:18.113324 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:16:18.113343 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:16:18.113363 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:16:18.113382 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:16:18.113580 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:16:18.113754 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:16:18.115878 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:16:18.116059 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Dec 13 01:16:18.116251 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 01:16:18.116278 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:16:18.116299 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 01:16:18.116319 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Dec 13 01:16:18.116339 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:16:18.116358 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 01:16:18.116378 kernel: clocksource: Switched to clocksource tsc
Dec 13 01:16:18.116403 kernel: Initialise system trusted keyrings
Dec 13 01:16:18.116423 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 01:16:18.116443 kernel: Key type asymmetric registered
Dec 13 01:16:18.116462 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:16:18.116481 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:16:18.116501 kernel: io scheduler mq-deadline registered
Dec 13 01:16:18.116522 kernel: io scheduler kyber registered
Dec 13 01:16:18.116541 kernel: io scheduler bfq registered
Dec 13 01:16:18.116561 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:16:18.116594 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 01:16:18.118140 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Dec 13 01:16:18.118177 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Dec 13 01:16:18.118389 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Dec 13 01:16:18.118416 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 01:16:18.118615 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Dec 13 01:16:18.118642 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:16:18.118663 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:16:18.118683 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 01:16:18.118709 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Dec 13 01:16:18.118727 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Dec 13 01:16:18.120017 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Dec 13 01:16:18.120052 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:16:18.120072 kernel: i8042: Warning: Keylock active
Dec 13 01:16:18.120092 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:16:18.120112 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:16:18.120303 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 01:16:18.120484 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 01:16:18.120663 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:16:17 UTC (1734052577)
Dec 13 01:16:18.121886 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 01:16:18.121918 kernel: intel_pstate: CPU model not supported
Dec 13 01:16:18.121939 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:16:18.121959 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 01:16:18.121979 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:16:18.121998 kernel: Segment Routing with IPv6
Dec 13 01:16:18.122023 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:16:18.122043 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:16:18.122063 kernel: Key type dns_resolver registered
Dec 13 01:16:18.122082 kernel: IPI shorthand broadcast: enabled
Dec 13 01:16:18.122103 kernel: sched_clock: Marking stable (835004162, 136836632)->(1006417082, -34576288)
Dec 13 01:16:18.122123 kernel: registered taskstats version 1
Dec 13 01:16:18.122142 kernel: Loading compiled-in X.509 certificates
Dec 13 01:16:18.122162 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:16:18.122181 kernel: Key type .fscrypt registered
Dec 13 01:16:18.122204 kernel: Key type fscrypt-provisioning registered
Dec 13 01:16:18.122224 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:16:18.122244 kernel: ima: No architecture policies found
Dec 13 01:16:18.122264 kernel: clk: Disabling unused clocks
Dec 13 01:16:18.122283 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:16:18.122303 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:16:18.122323 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:16:18.122343 kernel: Run /init as init process
Dec 13 01:16:18.122366 kernel: with arguments:
Dec 13 01:16:18.122386 kernel: /init
Dec 13 01:16:18.122405 kernel: with environment:
Dec 13 01:16:18.122424 kernel: HOME=/
Dec 13 01:16:18.122443 kernel: TERM=linux
Dec 13 01:16:18.122463 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:16:18.122483 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:16:18.122506 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:16:18.122533 systemd[1]: Detected virtualization google.
Dec 13 01:16:18.122554 systemd[1]: Detected architecture x86-64.
Dec 13 01:16:18.122582 systemd[1]: Running in initrd.
Dec 13 01:16:18.122603 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:16:18.122622 systemd[1]: Hostname set to .
Dec 13 01:16:18.122644 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:16:18.122665 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:16:18.122686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:16:18.122710 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:16:18.122732 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:16:18.122753 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:16:18.124797 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:16:18.124833 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:16:18.124855 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:16:18.124874 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:16:18.124903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:16:18.124922 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:16:18.124960 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:16:18.124984 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:16:18.125005 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:16:18.125027 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:16:18.125051 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:16:18.125073 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:16:18.125094 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:16:18.125114 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
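The \x2d sequences in the device unit names above are systemd's path escaping: '/' becomes '-', and bytes outside a small allowed set become \xNN. A minimal sketch of that mapping (an approximation of systemd-escape --path semantics, not systemd's actual code):

    # Approximation of systemd's path escaping; not systemd's implementation.
    def systemd_path_escape(path: str) -> str:
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")                   # path separators become '-'
            elif ch.isalnum() or ch in "_.":
                out.append(ch)                    # kept literally
            else:
                out.append("\\x%02x" % ord(ch))   # e.g. '-' -> \x2d
        return "".join(out)

    print(systemd_path_escape("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit above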
Dec 13 01:16:18.125135 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:16:18.125155 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:16:18.125176 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:16:18.125197 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:16:18.125222 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:16:18.125243 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:16:18.125263 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:16:18.125284 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:16:18.125305 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:16:18.125326 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:16:18.125346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:18.125366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:16:18.125423 systemd-journald[183]: Collecting audit messages is disabled.
Dec 13 01:16:18.125473 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:16:18.125494 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:16:18.125520 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:16:18.125541 systemd-journald[183]: Journal started
Dec 13 01:16:18.125589 systemd-journald[183]: Runtime Journal (/run/log/journal/482008b29c1741bba097202edd5cff71) is 8.0M, max 148.7M, 140.7M free.
Dec 13 01:16:18.110003 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 01:16:18.136798 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:16:18.141905 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:18.151413 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:16:18.162927 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:16:18.162016 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:18.166917 kernel: Bridge firewalling registered
Dec 13 01:16:18.165306 systemd-modules-load[184]: Inserted module 'br_netfilter'
Dec 13 01:16:18.184582 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:16:18.189034 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:16:18.192062 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:18.206973 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:16:18.211833 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:16:18.226158 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:16:18.230967 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:16:18.236999 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:16:18.241188 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:18.250595 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:16:18.281130 dracut-cmdline[218]: dracut-dracut-053
Dec 13 01:16:18.285761 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:16:18.291454 systemd-resolved[216]: Positive Trust Anchors:
Dec 13 01:16:18.291466 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:16:18.291536 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:16:18.296200 systemd-resolved[216]: Defaulting to hostname 'linux'.
Dec 13 01:16:18.297892 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:16:18.329013 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:18.383816 kernel: SCSI subsystem initialized
Dec 13 01:16:18.394834 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:16:18.405823 kernel: iscsi: registered transport (tcp)
Dec 13 01:16:18.429023 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:16:18.429106 kernel: QLogic iSCSI HBA Driver
Dec 13 01:16:18.479678 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:16:18.485007 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:16:18.522919 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:16:18.522997 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:16:18.523028 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:16:18.567811 kernel: raid6: avx2x4 gen() 18292 MB/s
Dec 13 01:16:18.584804 kernel: raid6: avx2x2 gen() 18213 MB/s
Dec 13 01:16:18.602152 kernel: raid6: avx2x1 gen() 14103 MB/s
Dec 13 01:16:18.602184 kernel: raid6: using algorithm avx2x4 gen() 18292 MB/s
Dec 13 01:16:18.620184 kernel: raid6: .... xor() 7781 MB/s, rmw enabled
Dec 13 01:16:18.620235 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:16:18.642809 kernel: xor: automatically using best checksumming function avx
Dec 13 01:16:18.813823 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:16:18.826840 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:16:18.835981 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:16:18.852984 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Dec 13 01:16:18.860042 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
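dracut-cmdline above echoes the full kernel command line, with the bootloader's "rootflags=rw mount.usrflags=ro" prefix appearing twice; since both copies carry identical values, the repetition is harmless. A small sketch of the key=value split such tooling performs (illustrative only, not dracut's parser):

    # Illustrative key=value split of a kernel command line (not dracut's code).
    def parse_cmdline(cmdline: str) -> dict:
        args = {}
        for tok in cmdline.split():
            key, sep, val = tok.partition("=")
            args[key] = val if sep else True   # later duplicates overwrite
        return args

    args = parse_cmdline("rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
                         "console=ttyS0,115200n8 flatcar.oem.id=gce")
    print(args["root"])                        # LABEL=ROOT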
Dec 13 01:16:18.873994 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:16:18.906518 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Dec 13 01:16:18.942058 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:16:18.949013 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:16:19.038583 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:19.048964 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:16:19.089756 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:16:19.100559 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:16:19.108891 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:19.116918 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:16:19.130043 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:16:19.160004 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:16:19.164839 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:16:19.202799 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:16:19.202873 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:16:19.203811 kernel: scsi host0: Virtio SCSI HBA
Dec 13 01:16:19.211952 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:16:19.258030 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Dec 13 01:16:19.212173 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:19.221715 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:19.245097 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:16:19.245383 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:19.249930 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:19.261759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:19.313572 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Dec 13 01:16:19.328898 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 13 01:16:19.329161 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 13 01:16:19.329396 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 13 01:16:19.329626 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 01:16:19.329901 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:16:19.329930 kernel: GPT:17805311 != 25165823
Dec 13 01:16:19.329962 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:16:19.329986 kernel: GPT:17805311 != 25165823
Dec 13 01:16:19.330009 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:16:19.330033 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:16:19.330058 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 13 01:16:19.317098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:19.330684 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:19.365036 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
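The GPT complaints above ("17805311 != 25165823") are the classic signature of a disk image that was written for a smaller disk and then grown: GPT keeps its backup header on the disk's last LBA, and this image's header still points at the old end. A worked check from the numbers in the log:

    # Worked check: where the backup GPT header should be vs. where it is.
    disk_sectors = 25165824              # "[sda] 25165824 512-byte logical blocks"
    expected_alt_lba = disk_sectors - 1  # GPT puts the backup header on the last LBA
    reported_alt_lba = 17805311          # from "GPT:17805311 != 25165823"
    grown_bytes = (expected_alt_lba - reported_alt_lba) * 512
    print(expected_alt_lba)              # 25165823
    print(grown_bytes / 2**30)           # ~3.5 GiB added when the disk was resized

This is why disk-uuid.service below rewrites the headers ("Primary Header is updated" / "Secondary Header is updated") rather than treating the mismatch as an error.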
Dec 13 01:16:19.390797 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (443)
Dec 13 01:16:19.390866 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (449)
Dec 13 01:16:19.403325 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Dec 13 01:16:19.421417 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Dec 13 01:16:19.428376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Dec 13 01:16:19.433052 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Dec 13 01:16:19.445329 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Dec 13 01:16:19.451164 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:16:19.474569 disk-uuid[549]: Primary Header is updated.
Dec 13 01:16:19.474569 disk-uuid[549]: Secondary Entries is updated.
Dec 13 01:16:19.474569 disk-uuid[549]: Secondary Header is updated.
Dec 13 01:16:19.489817 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:16:19.512805 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:16:19.520819 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:16:20.523601 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:16:20.523679 disk-uuid[550]: The operation has completed successfully.
Dec 13 01:16:20.602322 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:16:20.602465 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:16:20.630971 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:16:20.664005 sh[567]: Success
Dec 13 01:16:20.688866 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:16:20.767879 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:16:20.774670 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:16:20.801270 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:16:20.842391 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:16:20.842480 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:20.842523 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:16:20.851828 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:16:20.858652 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:16:20.896821 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:16:20.901729 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:16:20.902651 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:16:20.907977 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:16:20.921934 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
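verity-setup above binds /dev/mapper/usr to the verity.usrhash root hash from the kernel command line, so the read-only /usr image is integrity-checked block by block. A conceptual sketch of the leaf hashing only (assumptions: 4 KiB blocks, SHA-256, dm-verity format version 1 where the salt is prepended; the real hash-tree layout and superblock handling are more involved):

    import hashlib

    # Conceptual dm-verity leaf hash (format v1: salt prepended); assumptions only.
    def leaf_hash(block: bytes, salt: bytes) -> bytes:
        assert len(block) == 4096               # one leaf per 4 KiB data block
        return hashlib.sha256(salt + block).digest()

    # Leaf hashes are packed into 4 KiB hash blocks and hashed again, level by
    # level, until a single root remains; that root must equal verity.usrhash,
    # which is how one command-line value pins the entire /usr image.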
Dec 13 01:16:20.979787 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:20.979840 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:20.979868 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:16:20.997865 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:16:20.997943 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:16:21.011710 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:16:21.029932 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:21.038376 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:16:21.064034 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:16:21.139704 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:16:21.145060 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:16:21.264970 ignition[683]: Ignition 2.19.0
Dec 13 01:16:21.265410 ignition[683]: Stage: fetch-offline
Dec 13 01:16:21.266753 systemd-networkd[750]: lo: Link UP
Dec 13 01:16:21.265479 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:21.266760 systemd-networkd[750]: lo: Gained carrier
Dec 13 01:16:21.265497 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:21.268273 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:16:21.265680 ignition[683]: parsed url from cmdline: ""
Dec 13 01:16:21.269154 systemd-networkd[750]: Enumeration completed
Dec 13 01:16:21.265688 ignition[683]: no config URL provided
Dec 13 01:16:21.269910 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:21.265698 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:16:21.269917 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:16:21.265713 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:16:21.271712 systemd-networkd[750]: eth0: Link UP
Dec 13 01:16:21.265725 ignition[683]: failed to fetch config: resource requires networking
Dec 13 01:16:21.271717 systemd-networkd[750]: eth0: Gained carrier
Dec 13 01:16:21.266057 ignition[683]: Ignition finished successfully
Dec 13 01:16:21.271726 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:21.357707 ignition[759]: Ignition 2.19.0
Dec 13 01:16:21.284847 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.87/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 01:16:21.357718 ignition[759]: Stage: fetch
Dec 13 01:16:21.289168 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:16:21.357945 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:21.296260 systemd[1]: Reached target network.target - Network.
Dec 13 01:16:21.357957 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:21.316977 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
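The ignition[683] and systemd-networkd[750] lines above arrive interleaved with non-monotonic timestamps because two sources raced while the journal flushed. A small sketch (assumption: every line starts with a "Dec 13 HH:MM:SS.ffffff" stamp) that re-sorts such lines into strict time order when reading a dump like this:

    from datetime import datetime

    # Sketch: re-sort interleaved journal lines by their timestamps.
    def sort_journal(lines, year=2024):
        def key(line):
            stamp = " ".join(line.split()[:3])   # e.g. "Dec 13 01:16:21.266753"
            return datetime.strptime(f"{stamp} {year}", "%b %d %H:%M:%S.%f %Y")
        return sorted(lines, key=key)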
Dec 13 01:16:21.358071 ignition[759]: parsed url from cmdline: ""
Dec 13 01:16:21.368679 unknown[759]: fetched base config from "system"
Dec 13 01:16:21.358078 ignition[759]: no config URL provided
Dec 13 01:16:21.368692 unknown[759]: fetched base config from "system"
Dec 13 01:16:21.358085 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:16:21.368709 unknown[759]: fetched user config from "gcp"
Dec 13 01:16:21.358095 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:16:21.372509 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:16:21.358117 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 13 01:16:21.405085 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:16:21.361743 ignition[759]: GET result: OK
Dec 13 01:16:21.454960 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:16:21.361865 ignition[759]: parsing config with SHA512: a2d42cf3ad98c8fe902dbe73e55c4250d5fd583d700dfa93a828bc9119716e66cea6b16b76484ac220f4b506f07a4ddad23ba05df0eae53272302fd7c008e9be
Dec 13 01:16:21.477041 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:16:21.370244 ignition[759]: fetch: fetch complete
Dec 13 01:16:21.528589 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:16:21.370251 ignition[759]: fetch: fetch passed
Dec 13 01:16:21.530191 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:16:21.370323 ignition[759]: Ignition finished successfully
Dec 13 01:16:21.559042 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:16:21.452210 ignition[765]: Ignition 2.19.0
Dec 13 01:16:21.567048 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:16:21.452219 ignition[765]: Stage: kargs
Dec 13 01:16:21.585068 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:16:21.452441 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:21.602055 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:16:21.452453 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:21.634057 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:16:21.453759 ignition[765]: kargs: kargs passed
Dec 13 01:16:21.453838 ignition[765]: Ignition finished successfully
Dec 13 01:16:21.526265 ignition[771]: Ignition 2.19.0
Dec 13 01:16:21.526274 ignition[771]: Stage: disks
Dec 13 01:16:21.526472 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:21.526485 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:21.527595 ignition[771]: disks: disks passed
Dec 13 01:16:21.527653 ignition[771]: Ignition finished successfully
Dec 13 01:16:21.686616 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:16:21.864438 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:16:21.892903 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:16:22.005855 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:16:22.006961 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:16:22.007769 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
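The fetch stage above pulls the instance's user-data from the GCE metadata server and logs its SHA-512 before parsing, so the same config can be recognized across stages. A minimal sketch of that GET and digest (illustrative, not Ignition's code; the Metadata-Flavor header is required by the GCE metadata server):

    import hashlib
    import urllib.request

    # The endpoint Ignition queries above, with the header GCE requires.
    URL = ("http://169.254.169.254/computeMetadata/v1/"
           "instance/attributes/user-data")
    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        raw = resp.read()                      # raw Ignition config bytes
    print(hashlib.sha512(raw).hexdigest())     # the SHA512 value logged above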
Dec 13 01:16:22.030946 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:22.059914 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:16:22.106062 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (787)
Dec 13 01:16:22.106099 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:22.106116 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:22.106130 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:16:22.097995 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:16:22.145015 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:16:22.145068 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:16:22.098076 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:16:22.098123 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:16:22.130945 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:22.153320 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:16:22.184996 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:16:22.318981 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:16:22.329089 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:16:22.339904 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:16:22.350880 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:16:22.487454 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:16:22.490958 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:16:22.528816 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:22.538079 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:16:22.547996 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:16:22.592967 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:16:22.602026 ignition[902]: INFO : Ignition 2.19.0
Dec 13 01:16:22.602026 ignition[902]: INFO : Stage: mount
Dec 13 01:16:22.602026 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:22.602026 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:22.602026 ignition[902]: INFO : mount: mount passed
Dec 13 01:16:22.602026 ignition[902]: INFO : Ignition finished successfully
Dec 13 01:16:22.613196 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:16:22.636947 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:16:22.680465 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:22.736823 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (914) Dec 13 01:16:22.754685 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:16:22.754786 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:16:22.754815 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:16:22.776159 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:16:22.776242 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:16:22.779419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:16:22.818385 ignition[931]: INFO : Ignition 2.19.0 Dec 13 01:16:22.818385 ignition[931]: INFO : Stage: files Dec 13 01:16:22.832924 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:22.832924 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:22.832924 ignition[931]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:16:22.832924 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:16:22.832924 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:16:22.832924 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:16:22.832924 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:16:22.832924 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:16:22.832335 unknown[931]: wrote ssh authorized keys file for user: core Dec 13 01:16:22.931941 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:16:22.931941 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:16:22.931941 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:16:22.931941 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:16:22.996894 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:16:22.946927 systemd-networkd[750]: eth0: Gained IPv6LL Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: 
op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:16:23.421173 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:16:23.807566 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:16:23.807566 ignition[931]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:16:23.846941 ignition[931]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:16:23.846941 
ignition[931]: INFO : files: files passed Dec 13 01:16:23.846941 ignition[931]: INFO : Ignition finished successfully Dec 13 01:16:23.811853 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:16:23.843040 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:16:23.873005 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:16:23.926459 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:16:24.120930 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:16:24.120930 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:16:23.926763 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:16:24.187950 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:16:23.954337 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:16:23.982220 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:16:24.006993 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:16:24.115443 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:16:24.115562 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:16:24.132225 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:16:24.146142 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:16:24.178115 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:16:24.185963 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:16:24.235120 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:16:24.262999 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:16:24.296329 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:16:24.316899 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:16:24.338980 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:16:24.357003 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:16:24.357120 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:16:24.390935 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:16:24.408932 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:16:24.425930 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:16:24.442921 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:16:24.460898 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:16:24.479914 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:16:24.496923 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:16:24.514938 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:16:24.532918 systemd[1]: Stopped target local-fs.target - Local File Systems. 
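The files stage above is driven by a JSON Ignition config (typically rendered from Butane), which is not itself printed in the log. The fragment below is a hypothetical reconstruction from the logged operations (the helm tarball fetch, the kubernetes.raw sysext link, the containerd drop-in, the prepare-helm preset), shown only to illustrate the shape such a config takes; field names follow the Ignition v3 spec, and contents that never appeared in the log are left as placeholders:

    # Hypothetical Ignition config fragment reconstructed from the logged
    # operations above; the exact config used on this instance is not
    # visible in the log.
    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source":
                    "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
            }],
        },
        "systemd": {
            "units": [{
                "name": "containerd.service",
                "dropins": [{"name": "10-use-cgroupfs.conf",
                             "contents": "# drop-in body not shown in the log\n"}],
            }, {
                "name": "prepare-helm.service",
                "enabled": True,  # matches "setting preset to enabled"
                # unit body omitted; it was written but not printed here
            }],
        },
    }
    print(json.dumps(config, indent=2))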
Dec 13 01:16:24.550966 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:16:24.567925 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:16:24.568045 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:16:24.598884 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:24.616906 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:24.634923 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:16:24.635011 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:24.652938 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:16:24.653057 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:16:24.778956 ignition[983]: INFO : Ignition 2.19.0 Dec 13 01:16:24.778956 ignition[983]: INFO : Stage: umount Dec 13 01:16:24.778956 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:24.778956 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:24.778956 ignition[983]: INFO : umount: umount passed Dec 13 01:16:24.778956 ignition[983]: INFO : Ignition finished successfully Dec 13 01:16:24.681008 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:16:24.681183 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:16:24.702003 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:16:24.702093 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:16:24.729915 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:16:24.758914 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:16:24.759031 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:24.764925 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:16:24.786908 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:16:24.787127 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:16:24.837131 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:16:24.837217 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:16:24.865770 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:16:24.866609 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:16:24.866717 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:16:24.886448 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:16:24.886561 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:16:24.905338 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:16:24.905446 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:16:24.915543 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:16:24.915656 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:16:24.930168 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:16:24.930234 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:16:24.947174 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:16:24.947239 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
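The ignition-*.service units being stopped above mirror Ignition's fixed stage pipeline, which runs once in the initrd. The summary below restates the order the stages appeared in this log (fetch-offline, fetch, kargs, disks, mount, files, and finally umount); the one-line descriptions are editorial, not log output:

    # Ignition's stage pipeline, as named by the units stopped above.
    IGNITION_STAGES = [
        ("fetch-offline", "look for a config that needs no network"),
        ("fetch",         "retrieve the config (here: GCE metadata)"),
        ("kargs",         "apply kernel argument changes, if any"),
        ("disks",         "partition disks, create filesystems"),
        ("mount",         "mount the new filesystems under /sysroot"),
        ("files",         "write files, users, and units into /sysroot"),
        ("umount",        "unmount what the mount stage mounted"),
    ]
    for name, what in IGNITION_STAGES:
        print(f"ignition-{name}: {what}")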
Dec 13 01:16:24.982138 systemd[1]: Stopped target network.target - Network. Dec 13 01:16:24.992070 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:16:24.992169 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:16:25.009178 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:16:25.027108 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:16:25.030894 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:25.044157 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:16:25.062130 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:16:25.077153 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:16:25.077221 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:16:25.105095 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:16:25.105162 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:16:25.113131 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:16:25.113202 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:16:25.140169 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:16:25.140252 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:16:25.148170 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:16:25.148241 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:16:25.165487 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:16:25.169848 systemd-networkd[750]: eth0: DHCPv6 lease lost Dec 13 01:16:25.192105 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:16:25.201523 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:16:25.201651 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:16:25.229441 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:16:25.229758 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:16:25.237695 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:16:25.237747 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:25.260897 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:16:25.287874 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:16:25.288076 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:16:25.298146 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:16:25.298208 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:25.327133 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:16:25.764894 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 01:16:25.327210 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:16:25.345116 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:16:25.345191 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:25.366214 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 01:16:25.386228 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:16:25.386444 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:16:25.417318 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:16:25.417431 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:16:25.436658 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:16:25.436742 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:25.452969 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:16:25.453042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:25.470947 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:16:25.471043 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:16:25.498910 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:16:25.499016 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:16:25.525909 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:16:25.526022 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:16:25.560998 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:16:25.565123 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:16:25.565201 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:25.600136 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:16:25.600199 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:25.629566 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:16:25.629676 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:16:25.650651 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:16:25.681997 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:16:25.723984 systemd[1]: Switching root. 
Dec 13 01:16:26.040891 systemd-journald[183]: Journal stopped
Dec 13 01:16:18.099983 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 01:16:18.100001 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 01:16:18.100019 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 01:16:18.100036 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Dec 13 01:16:18.100058 kernel: Zone ranges: Dec 13 01:16:18.100076 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:16:18.100093 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 01:16:18.100111 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:16:18.100128 kernel: Movable zone start for each node Dec 13 01:16:18.100146 kernel: Early memory node ranges Dec 13 01:16:18.100163 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 01:16:18.100181 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 01:16:18.100198 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Dec 13 01:16:18.100219 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 01:16:18.100236 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:16:18.100254 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 01:16:18.100271 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:16:18.100289 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 01:16:18.100306 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 01:16:18.100324 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 01:16:18.100341 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 01:16:18.100359 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 01:16:18.100380 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:16:18.100397 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:16:18.100415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:16:18.100432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:16:18.100450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:16:18.100467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:16:18.100485 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:16:18.100503 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:16:18.100520 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 01:16:18.100541 kernel: Booting paravirtualized kernel on KVM Dec 13 01:16:18.100559 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:16:18.100583 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:16:18.100600 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:16:18.100618 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:16:18.100635 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:16:18.100652 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:16:18.100670 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:16:18.100689 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:16:18.100711 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:16:18.100728 kernel: random: crng init done Dec 13 01:16:18.100746 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 01:16:18.100764 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:16:18.100803 kernel: Fallback order for Node 0: 0 Dec 13 01:16:18.100820 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Dec 13 01:16:18.100839 kernel: Policy zone: Normal Dec 13 01:16:18.100856 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:16:18.100878 kernel: software IO TLB: area num 2. Dec 13 01:16:18.100896 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 346940K reserved, 0K cma-reserved) Dec 13 01:16:18.100914 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:16:18.100932 kernel: Kernel/User page tables isolation: enabled Dec 13 01:16:18.100949 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:16:18.100967 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:16:18.100985 kernel: Dynamic Preempt: voluntary Dec 13 01:16:18.101002 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:16:18.101021 kernel: rcu: RCU event tracing is enabled. Dec 13 01:16:18.101055 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:16:18.101074 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:16:18.101093 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:16:18.101115 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:16:18.101133 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:16:18.101152 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:16:18.101170 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:16:18.101188 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:16:18.101207 kernel: Console: colour dummy device 80x25 Dec 13 01:16:18.101229 kernel: printk: console [ttyS0] enabled Dec 13 01:16:18.101247 kernel: ACPI: Core revision 20230628 Dec 13 01:16:18.101266 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:16:18.101284 kernel: x2apic enabled Dec 13 01:16:18.101303 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:16:18.101321 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 01:16:18.101340 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:16:18.101359 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Dec 13 01:16:18.101381 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 01:16:18.101400 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 01:16:18.101418 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:16:18.101437 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 01:16:18.101455 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 01:16:18.101474 kernel: Spectre V2 : Mitigation: IBRS Dec 13 01:16:18.101492 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:16:18.101511 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:16:18.101530 kernel: RETBleed: Mitigation: IBRS Dec 13 01:16:18.101552 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:16:18.101576 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Dec 13 01:16:18.101595 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:16:18.101613 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 01:16:18.101632 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:16:18.101650 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:16:18.101668 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:16:18.101687 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:16:18.101706 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:16:18.101728 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 01:16:18.101747 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:16:18.101766 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:16:18.101797 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:16:18.101816 kernel: landlock: Up and running. Dec 13 01:16:18.101834 kernel: SELinux: Initializing. Dec 13 01:16:18.101852 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:16:18.101871 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:16:18.101890 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 01:16:18.101914 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:16:18.101932 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:16:18.101951 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:16:18.101970 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 01:16:18.101988 kernel: signal: max sigframe size: 1776 Dec 13 01:16:18.102007 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:16:18.102025 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:16:18.102044 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:16:18.102062 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:16:18.102084 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:16:18.102103 kernel: .... node #0, CPUs: #1 Dec 13 01:16:18.102122 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 01:16:18.102141 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 01:16:18.102160 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:16:18.102179 kernel: smpboot: Max logical packages: 1 Dec 13 01:16:18.102197 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 01:16:18.102216 kernel: devtmpfs: initialized Dec 13 01:16:18.102238 kernel: x86/mm: Memory block size: 128MB Dec 13 01:16:18.102256 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 01:16:18.102275 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:16:18.102294 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:16:18.102313 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:16:18.102331 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:16:18.102350 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:16:18.102368 kernel: audit: type=2000 audit(1734052577.172:1): state=initialized audit_enabled=0 res=1 Dec 13 01:16:18.102386 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:16:18.102409 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:16:18.102427 kernel: cpuidle: using governor menu Dec 13 01:16:18.102451 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:16:18.103935 kernel: dca service started, version 1.12.1 Dec 13 01:16:18.103955 kernel: PCI: Using configuration type 1 for base access Dec 13 01:16:18.103973 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
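The mitigation lines above (Spectre V1/V2, RETBleed, MDS, MMIO Stale Data) have a runtime counterpart: the kernel exports the same verdicts under /sys/devices/system/cpu/vulnerabilities. A short reader:

    # Read the kernel's CPU vulnerability/mitigation summary from sysfs;
    # these files correspond to the Spectre/MDS/RETBleed lines logged above.
    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")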
Dec 13 01:16:18.103991 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:16:18.104009 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:16:18.104027 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:16:18.104052 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:16:18.104071 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:16:18.104091 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:16:18.104111 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:16:18.104131 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:16:18.104151 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 01:16:18.104171 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:16:18.104191 kernel: ACPI: Interpreter enabled Dec 13 01:16:18.104211 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:16:18.104235 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:16:18.104256 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:16:18.104276 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 13 01:16:18.104296 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 01:16:18.104316 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:16:18.104562 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:16:18.104763 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 01:16:18.106280 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 01:16:18.106437 kernel: PCI host bridge to bus 0000:00 Dec 13 01:16:18.106882 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:16:18.107175 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:16:18.107384 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:16:18.107565 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 01:16:18.107753 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:16:18.108061 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 01:16:18.108284 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 01:16:18.108497 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 01:16:18.108697 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 01:16:18.108944 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 01:16:18.109152 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 01:16:18.109362 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 01:16:18.109574 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:16:18.111879 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 01:16:18.112117 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 01:16:18.112338 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:16:18.112544 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 01:16:18.112735 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 01:16:18.112766 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:16:18.112804 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:16:18.112828 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:16:18.112845 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:16:18.112862 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 01:16:18.112880 kernel: iommu: Default domain type: Translated Dec 13 01:16:18.112900 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:16:18.112919 kernel: efivars: Registered efivars operations Dec 13 01:16:18.112939 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:16:18.112963 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:16:18.112982 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 01:16:18.113000 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 01:16:18.113017 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 01:16:18.113034 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 01:16:18.113052 kernel: vgaarb: loaded Dec 13 01:16:18.113070 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:16:18.113088 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:16:18.113107 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:16:18.113131 kernel: pnp: PnP ACPI init Dec 13 01:16:18.113151 kernel: pnp: PnP ACPI: found 7 devices Dec 13 01:16:18.113170 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:16:18.113189 kernel: NET: Registered PF_INET protocol family Dec 13 01:16:18.113208 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:16:18.113226 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 01:16:18.113245 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:16:18.113264 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:16:18.113282 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 01:16:18.113304 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 01:16:18.113324 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:16:18.113343 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:16:18.113363 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:16:18.113382 kernel: NET: Registered PF_XDP protocol family Dec 13 01:16:18.113580 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:16:18.113754 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:16:18.115878 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:16:18.116059 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 01:16:18.116251 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:16:18.116278 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:16:18.116299 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 01:16:18.116319 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 01:16:18.116339 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:16:18.116358 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:16:18.116378 kernel: clocksource: Switched to clocksource tsc Dec 13 01:16:18.116403 kernel: Initialise system trusted keyrings Dec 13 01:16:18.116423 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 01:16:18.116443 kernel: Key type asymmetric registered Dec 13 01:16:18.116462 kernel: Asymmetric key parser 'x509' registered Dec 13 01:16:18.116481 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:16:18.116501 kernel: io scheduler mq-deadline registered Dec 13 01:16:18.116522 kernel: io scheduler kyber registered Dec 13 01:16:18.116541 kernel: io scheduler bfq registered Dec 13 01:16:18.116561 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:16:18.116594 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 01:16:18.118140 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 01:16:18.118177 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 01:16:18.118389 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 01:16:18.118416 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 01:16:18.118615 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 01:16:18.118642 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:16:18.118663 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:16:18.118683 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 01:16:18.118709 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 01:16:18.118727 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 01:16:18.120017 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 01:16:18.120052 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:16:18.120072 kernel: i8042: Warning: Keylock active Dec 13 01:16:18.120092 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:16:18.120112 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:16:18.120303 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 01:16:18.120484 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 01:16:18.120663 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:16:17 UTC (1734052577) Dec 13 01:16:18.121886 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 01:16:18.121918 kernel: intel_pstate: CPU model not supported Dec 13 01:16:18.121939 kernel: pstore: Using crash dump compression: deflate Dec 13 01:16:18.121959 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 01:16:18.121979 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:16:18.121998 kernel: Segment Routing with IPv6 Dec 13 01:16:18.122023 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:16:18.122043 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:16:18.122063 kernel: Key type dns_resolver registered Dec 13 01:16:18.122082 kernel: IPI shorthand broadcast: enabled Dec 13 01:16:18.122103 kernel: sched_clock: Marking stable (835004162, 136836632)->(1006417082, -34576288) Dec 13 01:16:18.122123 kernel: registered taskstats version 1 Dec 13 01:16:18.122142 kernel: Loading compiled-in X.509 certificates Dec 13 01:16:18.122162 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:16:18.122181 kernel: Key type .fscrypt registered Dec 13 01:16:18.122204 kernel: Key type fscrypt-provisioning registered Dec 13 01:16:18.122224 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:16:18.122244 kernel: ima: No architecture policies found Dec 13 
01:16:18.122264 kernel: clk: Disabling unused clocks Dec 13 01:16:18.122283 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:16:18.122303 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:16:18.122323 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:16:18.122343 kernel: Run /init as init process Dec 13 01:16:18.122366 kernel: with arguments: Dec 13 01:16:18.122386 kernel: /init Dec 13 01:16:18.122405 kernel: with environment: Dec 13 01:16:18.122424 kernel: HOME=/ Dec 13 01:16:18.122443 kernel: TERM=linux Dec 13 01:16:18.122463 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:16:18.122483 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:16:18.122506 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:16:18.122533 systemd[1]: Detected virtualization google. Dec 13 01:16:18.122554 systemd[1]: Detected architecture x86-64. Dec 13 01:16:18.122582 systemd[1]: Running in initrd. Dec 13 01:16:18.122603 systemd[1]: No hostname configured, using default hostname. Dec 13 01:16:18.122622 systemd[1]: Hostname set to . Dec 13 01:16:18.122644 systemd[1]: Initializing machine ID from random generator. Dec 13 01:16:18.122665 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:16:18.122686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:18.122710 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:18.122732 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:16:18.122753 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:16:18.124797 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:16:18.124833 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:16:18.124855 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:16:18.124874 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:16:18.124903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:18.124922 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:18.124960 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:16:18.124984 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:16:18.125005 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:16:18.125027 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:16:18.125051 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:16:18.125073 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:16:18.125094 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:16:18.125114 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
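The dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device names above are systemd's escaped form of /dev/disk/by-label/EFI-SYSTEM: slashes become dashes and literal dashes become \x2d, which is what systemd-escape --path produces. A simplified sketch of the rule (edge cases such as a leading dot are ignored here; systemd-escape(1) is authoritative):

    # Simplified sketch of systemd's path escaping, which turns
    # /dev/disk/by-label/EFI-SYSTEM into dev-disk-by\x2dlabel-EFI\x2dSYSTEM
    # as seen in the .device unit names above.
    def systemd_escape_path(path: str) -> str:
        out = []
        for ch in path.strip("/"):
            if ch.isalnum() or ch in ":_.":
                out.append(ch)
            elif ch == "/":
                out.append("-")                   # path separators become dashes
            else:
                out.append(f"\\x{ord(ch):02x}")   # e.g. "-" -> \x2d
        return "".join(out)

    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device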
Dec 13 01:16:18.125135 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:18.125155 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:18.125176 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:18.125197 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:16:18.125222 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:16:18.125243 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:16:18.125263 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:16:18.125284 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:16:18.125305 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:16:18.125326 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:16:18.125346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:16:18.125366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:16:18.125423 systemd-journald[183]: Collecting audit messages is disabled. Dec 13 01:16:18.125473 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:18.125494 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:16:18.125520 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:16:18.125541 systemd-journald[183]: Journal started Dec 13 01:16:18.125589 systemd-journald[183]: Runtime Journal (/run/log/journal/482008b29c1741bba097202edd5cff71) is 8.0M, max 148.7M, 140.7M free. Dec 13 01:16:18.110003 systemd-modules-load[184]: Inserted module 'overlay' Dec 13 01:16:18.136798 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:16:18.141905 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:18.151413 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:16:18.162927 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:16:18.162016 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:16:18.166917 kernel: Bridge firewalling registered Dec 13 01:16:18.165306 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 01:16:18.184582 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:16:18.189034 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:16:18.192062 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:16:18.206973 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:16:18.211833 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:18.226158 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:18.230967 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:18.236999 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:16:18.241188 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
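The bridge warning above means bridged traffic no longer traverses iptables unless br_netfilter is loaded, which matters for the Kubernetes tooling this boot installs later. A sketch of making the module and its sysctls persistent, using the standard modules-load.d and sysctl.d locations (run as root, then "modprobe br_netfilter" and "sysctl --system"):

    # Load br_netfilter at boot and enable the bridge-nf sysctls, as the
    # bridge warning above suggests for setups that filter bridged traffic.
    from pathlib import Path

    Path("/etc/modules-load.d/br_netfilter.conf").write_text("br_netfilter\n")
    Path("/etc/sysctl.d/90-bridge-nf.conf").write_text(
        "net.bridge.bridge-nf-call-iptables = 1\n"
        "net.bridge.bridge-nf-call-ip6tables = 1\n"
    )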
Dec 13 01:16:18.250595 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:16:18.281130 dracut-cmdline[218]: dracut-dracut-053 Dec 13 01:16:18.285761 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:16:18.291454 systemd-resolved[216]: Positive Trust Anchors: Dec 13 01:16:18.291466 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:16:18.291536 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:16:18.296200 systemd-resolved[216]: Defaulting to hostname 'linux'. Dec 13 01:16:18.297892 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:16:18.329013 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:16:18.383816 kernel: SCSI subsystem initialized Dec 13 01:16:18.394834 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:16:18.405823 kernel: iscsi: registered transport (tcp) Dec 13 01:16:18.429023 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:16:18.429106 kernel: QLogic iSCSI HBA Driver Dec 13 01:16:18.479678 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:16:18.485007 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:16:18.522919 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:16:18.522997 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:16:18.523028 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:16:18.567811 kernel: raid6: avx2x4 gen() 18292 MB/s Dec 13 01:16:18.584804 kernel: raid6: avx2x2 gen() 18213 MB/s Dec 13 01:16:18.602152 kernel: raid6: avx2x1 gen() 14103 MB/s Dec 13 01:16:18.602184 kernel: raid6: using algorithm avx2x4 gen() 18292 MB/s Dec 13 01:16:18.620184 kernel: raid6: .... xor() 7781 MB/s, rmw enabled Dec 13 01:16:18.620235 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:16:18.642809 kernel: xor: automatically using best checksumming function avx Dec 13 01:16:18.813823 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:16:18.826840 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:16:18.835981 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:16:18.852984 systemd-udevd[400]: Using default interface naming scheme 'v255'. Dec 13 01:16:18.860042 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
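dracut-cmdline above echoes the kernel command line with dracut's own prefix (rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro) prepended to the original arguments, which is why rootflags=rw appears twice with agreeing values. A small parser for lines like this (simplified; the kernel's real parsing also handles quoting):

    # Parse a kernel command line, such as the one dracut reports above,
    # into bare flags and key=value options.
    def parse_cmdline(cmdline: str):
        flags, options = [], {}
        for token in cmdline.split():
            if "=" in token:
                key, _, value = token.partition("=")
                options[key] = value   # a repeated key keeps the last value
            else:
                flags.append(token)
        return flags, options

    with open("/proc/cmdline") as f:
        flags, options = parse_cmdline(f.read())
    print(options.get("root"), options.get("verity.usrhash"))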
Dec 13 01:16:18.873994 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:16:18.906518 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Dec 13 01:16:18.942058 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:16:18.949013 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:16:19.038583 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:16:19.048964 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:16:19.089756 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:16:19.100559 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:16:19.108891 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:16:19.116918 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:16:19.130043 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:16:19.160004 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:16:19.164839 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:16:19.202799 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:16:19.202873 kernel: AES CTR mode by8 optimization enabled Dec 13 01:16:19.203811 kernel: scsi host0: Virtio SCSI HBA Dec 13 01:16:19.211952 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:16:19.258030 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 01:16:19.212173 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:16:19.221715 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:16:19.245097 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:16:19.245383 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:19.249930 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:16:19.261759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:16:19.313572 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 01:16:19.328898 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 01:16:19.329161 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 01:16:19.329396 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 01:16:19.329626 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 01:16:19.329901 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:16:19.329930 kernel: GPT:17805311 != 25165823 Dec 13 01:16:19.329962 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:16:19.329986 kernel: GPT:17805311 != 25165823 Dec 13 01:16:19.330009 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:16:19.330033 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:16:19.330058 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 01:16:19.317098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:19.330684 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:16:19.365036 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
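The GPT warnings above are expected on first boot: the disk image ships a backup GPT header written for the image's original size, while the provisioned PersistentDisk is larger, so the header is no longer at the true end of the disk. The two LBAs in the log give both sizes directly:

    # The two LBAs from "GPT:17805311 != 25165823" above: where the backup
    # GPT header actually is (end of the original image) vs. where it
    # should be (last sector of the provisioned disk).
    SECTOR = 512
    image_last_lba = 17805311
    disk_last_lba = 25165823    # 25165824 512-byte logical blocks - 1

    print((image_last_lba + 1) * SECTOR / 2**30)  # ~8.49 GiB image
    print((disk_last_lba + 1) * SECTOR / 2**30)   # 12.0 GiB disk
    # One common repair is "sgdisk -e /dev/sda", which relocates the backup
    # header to the true end of the disk (parted offers the same fix).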
Dec 13 01:16:19.390797 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (443) Dec 13 01:16:19.390866 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (449) Dec 13 01:16:19.403325 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Dec 13 01:16:19.421417 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Dec 13 01:16:19.428376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Dec 13 01:16:19.433052 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Dec 13 01:16:19.445329 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 13 01:16:19.451164 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:16:19.474569 disk-uuid[549]: Primary Header is updated. Dec 13 01:16:19.474569 disk-uuid[549]: Secondary Entries is updated. Dec 13 01:16:19.474569 disk-uuid[549]: Secondary Header is updated. Dec 13 01:16:19.489817 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:16:19.512805 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:16:19.520819 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:16:20.523601 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:16:20.523679 disk-uuid[550]: The operation has completed successfully. Dec 13 01:16:20.602322 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:16:20.602465 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:16:20.630971 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:16:20.664005 sh[567]: Success Dec 13 01:16:20.688866 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:16:20.767879 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:16:20.774670 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:16:20.801270 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:16:20.842391 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:16:20.842480 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:16:20.842523 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:16:20.851828 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:16:20.858652 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:16:20.896821 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:16:20.901729 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:16:20.902651 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:16:20.907977 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:16:20.921934 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
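verity-setup.service above maps /dev/mapper/usr over the USR-A partition and validates it against the verity.usrhash value from the kernel command line; the sha256-avx2 line shows which hash backend dm-verity picked. A deliberately simplified sketch of the underlying idea follows. Real dm-verity adds a salt and a superblock, so this is an illustration of the hash-tree construction, not the actual on-disk format:

# Simplified sketch of the dm-verity idea: hash fixed-size data blocks,
# then hash the concatenated digests level by level into a tree whose
# root is compared against the verity.usrhash kernel argument.
import hashlib

BLOCK = 4096

def hash_blocks(data: bytes) -> list[bytes]:
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def tree_root(data: bytes) -> str:
    level = hash_blocks(data)
    while len(level) > 1:
        # Pack 32-byte digests into 4096-byte nodes and hash again,
        # as dm-verity does for each tree level.
        level = hash_blocks(b"".join(level))
    return level[0].hex()

print(tree_root(b"\x00" * (4 * BLOCK)))   # toy input, not a real /usr image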
Dec 13 01:16:20.979787 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:16:20.979840 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:16:20.979868 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:16:20.997865 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:16:20.997943 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:16:21.011710 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:16:21.029932 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:16:21.038376 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:16:21.064034 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:16:21.139704 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:16:21.145060 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:16:21.264970 ignition[683]: Ignition 2.19.0 Dec 13 01:16:21.265410 ignition[683]: Stage: fetch-offline Dec 13 01:16:21.266753 systemd-networkd[750]: lo: Link UP Dec 13 01:16:21.265479 ignition[683]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:21.266760 systemd-networkd[750]: lo: Gained carrier Dec 13 01:16:21.265497 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:21.268273 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:16:21.265680 ignition[683]: parsed url from cmdline: "" Dec 13 01:16:21.269154 systemd-networkd[750]: Enumeration completed Dec 13 01:16:21.265688 ignition[683]: no config URL provided Dec 13 01:16:21.269910 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:21.265698 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:16:21.269917 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:16:21.265713 ignition[683]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:16:21.271712 systemd-networkd[750]: eth0: Link UP Dec 13 01:16:21.265725 ignition[683]: failed to fetch config: resource requires networking Dec 13 01:16:21.271717 systemd-networkd[750]: eth0: Gained carrier Dec 13 01:16:21.266057 ignition[683]: Ignition finished successfully Dec 13 01:16:21.271726 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:21.357707 ignition[759]: Ignition 2.19.0 Dec 13 01:16:21.284847 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.87/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 01:16:21.357718 ignition[759]: Stage: fetch Dec 13 01:16:21.289168 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:16:21.357945 ignition[759]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:21.296260 systemd[1]: Reached target network.target - Network. Dec 13 01:16:21.357957 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:21.316977 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 01:16:21.358071 ignition[759]: parsed url from cmdline: "" Dec 13 01:16:21.368679 unknown[759]: fetched base config from "system" Dec 13 01:16:21.358078 ignition[759]: no config URL provided Dec 13 01:16:21.368692 unknown[759]: fetched base config from "system" Dec 13 01:16:21.358085 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:16:21.368709 unknown[759]: fetched user config from "gcp" Dec 13 01:16:21.358095 ignition[759]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:16:21.372509 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:16:21.358117 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 01:16:21.405085 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:16:21.361743 ignition[759]: GET result: OK Dec 13 01:16:21.454960 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:16:21.361865 ignition[759]: parsing config with SHA512: a2d42cf3ad98c8fe902dbe73e55c4250d5fd583d700dfa93a828bc9119716e66cea6b16b76484ac220f4b506f07a4ddad23ba05df0eae53272302fd7c008e9be Dec 13 01:16:21.477041 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:16:21.370244 ignition[759]: fetch: fetch complete Dec 13 01:16:21.528589 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:16:21.370251 ignition[759]: fetch: fetch passed Dec 13 01:16:21.530191 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:16:21.370323 ignition[759]: Ignition finished successfully Dec 13 01:16:21.559042 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:16:21.452210 ignition[765]: Ignition 2.19.0 Dec 13 01:16:21.567048 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:16:21.452219 ignition[765]: Stage: kargs Dec 13 01:16:21.585068 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:16:21.452441 ignition[765]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:21.602055 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:16:21.452453 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:21.634057 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:16:21.453759 ignition[765]: kargs: kargs passed Dec 13 01:16:21.453838 ignition[765]: Ignition finished successfully Dec 13 01:16:21.526265 ignition[771]: Ignition 2.19.0 Dec 13 01:16:21.526274 ignition[771]: Stage: disks Dec 13 01:16:21.526472 ignition[771]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:21.526485 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:21.527595 ignition[771]: disks: disks passed Dec 13 01:16:21.527653 ignition[771]: Ignition finished successfully Dec 13 01:16:21.686616 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 01:16:21.864438 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:16:21.892903 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:16:22.005855 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:16:22.006961 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:16:22.007769 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
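The fetch stage above boils down to a plain metadata-server request: GCE serves instance user-data at the logged URL provided the request carries the Metadata-Flavor: Google header, and Ignition then logs a SHA512 fingerprint of what it received. A rough Python equivalent of those two log lines (Ignition itself is Go; this only mirrors the observable behavior):

# Sketch of what the Ignition "fetch" stage logs above amount to:
# pull user-data from the GCE metadata server and fingerprint it.
import hashlib
import urllib.request

URL = ("http://169.254.169.254/computeMetadata/v1/"
       "instance/attributes/user-data")

req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req, timeout=10) as resp:   # "attempt #1"
    body = resp.read()

# Matches the "parsing config with SHA512: ..." line in the log.
print("SHA512:", hashlib.sha512(body).hexdigest())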
Dec 13 01:16:22.030946 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:16:22.059914 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:16:22.106062 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (787) Dec 13 01:16:22.106099 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:16:22.106116 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:16:22.106130 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:16:22.097995 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:16:22.145015 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:16:22.145068 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:16:22.098076 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:16:22.098123 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:16:22.130945 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:16:22.153320 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:16:22.184996 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:16:22.318981 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:16:22.329089 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:16:22.339904 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:16:22.350880 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:16:22.487454 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:16:22.490958 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:16:22.528816 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:16:22.538079 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:16:22.547996 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:16:22.592967 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:16:22.602026 ignition[902]: INFO : Ignition 2.19.0 Dec 13 01:16:22.602026 ignition[902]: INFO : Stage: mount Dec 13 01:16:22.602026 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:22.602026 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:22.602026 ignition[902]: INFO : mount: mount passed Dec 13 01:16:22.602026 ignition[902]: INFO : Ignition finished successfully Dec 13 01:16:22.613196 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:16:22.636947 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:16:22.680465 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
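The "cut: ... No such file or directory" lines above come from initrd-setup-root probing /sysroot/etc/passwd, group, shadow, and gshadow, which simply do not exist yet on first boot; the service creates them before finishing. Presumably the cut invocations extract the name field from any entries already present; a hypothetical Python rendering of that probe, handling the missing-file case the same way the log shows:

# Hypothetical sketch of the probe behind the "cut" errors above: list the
# names already present in a ':'-separated account database, tolerating
# the first-boot case where the file does not exist yet.
def existing_names(path: str) -> list[str]:
    try:
        with open(path) as f:
            # passwd/group/shadow lines are ':'-separated; field 0 is the name
            return [line.split(":", 1)[0] for line in f if line.strip()]
    except FileNotFoundError:
        return []   # same situation the log shows: file missing, carry on

print(existing_names("/sysroot/etc/passwd"))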
Dec 13 01:16:22.736823 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (914) Dec 13 01:16:22.754685 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:16:22.754786 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:16:22.754815 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:16:22.776159 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:16:22.776242 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:16:22.779419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:16:22.818385 ignition[931]: INFO : Ignition 2.19.0 Dec 13 01:16:22.818385 ignition[931]: INFO : Stage: files Dec 13 01:16:22.832924 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:22.832924 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:22.832924 ignition[931]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:16:22.832924 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:16:22.832924 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:16:22.832924 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:16:22.832924 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:16:22.832924 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:16:22.832335 unknown[931]: wrote ssh authorized keys file for user: core Dec 13 01:16:22.931941 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:16:22.931941 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:16:22.931941 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:16:22.931941 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:16:22.996894 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:16:22.946927 systemd-networkd[750]: eth0: Gained IPv6LL Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: 
op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:16:23.140569 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:16:23.421173 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:16:23.807566 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:16:23.807566 ignition[931]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:16:23.846941 ignition[931]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:16:23.846941 ignition[931]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:16:23.846941 
ignition[931]: INFO : files: files passed Dec 13 01:16:23.846941 ignition[931]: INFO : Ignition finished successfully Dec 13 01:16:23.811853 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:16:23.843040 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:16:23.873005 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:16:23.926459 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:16:24.120930 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:16:24.120930 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:16:23.926763 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:16:24.187950 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:16:23.954337 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:16:23.982220 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:16:24.006993 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:16:24.115443 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:16:24.115562 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:16:24.132225 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:16:24.146142 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:16:24.178115 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:16:24.185963 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:16:24.235120 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:16:24.262999 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:16:24.296329 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:16:24.316899 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:16:24.338980 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:16:24.357003 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:16:24.357120 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:16:24.390935 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:16:24.408932 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:16:24.425930 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:16:24.442921 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:16:24.460898 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:16:24.479914 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:16:24.496923 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:16:24.514938 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:16:24.532918 systemd[1]: Stopped target local-fs.target - Local File Systems. 
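Worth noting in the files stage above is the op(a)/op(b) pair: the kubernetes sysext image is downloaded to a versioned path under /opt and then exposed through the stable name that systemd-sysext scans for, /etc/extensions/kubernetes.raw. A sketch of that link step, with paths taken from the log (/sysroot is the initrd's view of the target root):

# Sketch of op(a) above: expose a versioned sysext image under the stable
# name systemd-sysext looks for (/etc/extensions/<name>.raw).
import os

target = "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
link = "/sysroot/etc/extensions/kubernetes.raw"

os.makedirs(os.path.dirname(link), exist_ok=True)
if os.path.lexists(link):
    os.unlink(link)
os.symlink(target, link)   # target deliberately omits the /sysroot prefix:
                           # it is resolved after switch-root, not in the initrd
print(os.readlink(link))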
Dec 13 01:16:24.550966 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:16:24.567925 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:16:24.568045 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:16:24.598884 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:24.616906 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:24.634923 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:16:24.635011 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:24.652938 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:16:24.653057 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:16:24.778956 ignition[983]: INFO : Ignition 2.19.0 Dec 13 01:16:24.778956 ignition[983]: INFO : Stage: umount Dec 13 01:16:24.778956 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:24.778956 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:24.778956 ignition[983]: INFO : umount: umount passed Dec 13 01:16:24.778956 ignition[983]: INFO : Ignition finished successfully Dec 13 01:16:24.681008 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:16:24.681183 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:16:24.702003 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:16:24.702093 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:16:24.729915 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:16:24.758914 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:16:24.759031 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:24.764925 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:16:24.786908 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:16:24.787127 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:16:24.837131 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:16:24.837217 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:16:24.865770 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:16:24.866609 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:16:24.866717 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:16:24.886448 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:16:24.886561 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:16:24.905338 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:16:24.905446 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:16:24.915543 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:16:24.915656 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:16:24.930168 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:16:24.930234 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:16:24.947174 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:16:24.947239 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Dec 13 01:16:24.982138 systemd[1]: Stopped target network.target - Network. Dec 13 01:16:24.992070 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:16:24.992169 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:16:25.009178 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:16:25.027108 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:16:25.030894 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:25.044157 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:16:25.062130 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:16:25.077153 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:16:25.077221 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:16:25.105095 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:16:25.105162 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:16:25.113131 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:16:25.113202 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:16:25.140169 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:16:25.140252 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:16:25.148170 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:16:25.148241 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:16:25.165487 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:16:25.169848 systemd-networkd[750]: eth0: DHCPv6 lease lost Dec 13 01:16:25.192105 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:16:25.201523 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:16:25.201651 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:16:25.229441 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:16:25.229758 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:16:25.237695 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:16:25.237747 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:25.260897 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:16:25.287874 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:16:25.288076 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:16:25.298146 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:16:25.298208 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:25.327133 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:16:25.764894 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 01:16:25.327210 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:16:25.345116 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:16:25.345191 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:25.366214 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 01:16:25.386228 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:16:25.386444 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:16:25.417318 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:16:25.417431 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:16:25.436658 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:16:25.436742 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:25.452969 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:16:25.453042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:25.470947 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:16:25.471043 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:16:25.498910 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:16:25.499016 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:16:25.525909 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:16:25.526022 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:16:25.560998 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:16:25.565123 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:16:25.565201 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:25.600136 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:16:25.600199 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:25.629566 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:16:25.629676 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:16:25.650651 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:16:25.681997 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:16:25.723984 systemd[1]: Switching root. Dec 13 01:16:26.040891 systemd-journald[183]: Journal stopped Dec 13 01:16:28.397964 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:16:28.398020 kernel: SELinux: policy capability open_perms=1 Dec 13 01:16:28.398044 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:16:28.398062 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:16:28.398079 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:16:28.398097 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:16:28.398119 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:16:28.398142 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:16:28.398162 kernel: audit: type=1403 audit(1734052586.418:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:16:28.398208 systemd[1]: Successfully loaded SELinux policy in 80.441ms. Dec 13 01:16:28.398234 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.467ms. 
Dec 13 01:16:28.398257 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:16:28.398277 systemd[1]: Detected virtualization google. Dec 13 01:16:28.398298 systemd[1]: Detected architecture x86-64. Dec 13 01:16:28.398325 systemd[1]: Detected first boot. Dec 13 01:16:28.398349 systemd[1]: Initializing machine ID from random generator. Dec 13 01:16:28.398369 zram_generator::config[1041]: No configuration found. Dec 13 01:16:28.398392 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:16:28.398415 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:16:28.398441 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 01:16:28.398463 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:16:28.398485 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:16:28.398631 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:16:28.398659 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:16:28.398683 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:16:28.398706 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:16:28.398734 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:16:28.398758 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:16:28.398798 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:28.398820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:28.398842 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:16:28.398864 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:16:28.398884 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:16:28.398911 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:16:28.398948 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:16:28.398970 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:28.398991 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:16:28.399012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:16:28.399033 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:16:28.399055 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:16:28.399083 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:16:28.399105 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:16:28.399126 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:16:28.399152 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
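"Initializing machine ID from random generator" above means /etc/machine-id was absent on this first boot, so systemd minted one: 128 random bits, carrying version-4 UUID bits internally and written out as 32 lowercase hex characters. The equivalent shape in Python:

# Sketch: what "Initializing machine ID from random generator" produces --
# 128 random bits, printed as 32 lowercase hex characters in /etc/machine-id.
# systemd stamps the random bytes with v4 UUID bits; uuid4() gives the
# same shape here.
import uuid

machine_id = uuid.uuid4().hex   # 32 hex chars, no dashes
print(machine_id)
assert len(machine_id) == 32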
Dec 13 01:16:28.399177 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:16:28.399199 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:28.399223 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:28.399247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:28.399270 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:16:28.399293 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:16:28.399322 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:16:28.399346 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:16:28.399368 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:28.399391 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:16:28.399421 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:16:28.399445 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:16:28.399470 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:16:28.399494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:28.399518 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:16:28.399538 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:16:28.399557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:28.399578 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:16:28.399627 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:28.399666 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:16:28.399690 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:28.399713 kernel: ACPI: bus type drm_connector registered Dec 13 01:16:28.399747 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:16:28.399799 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:16:28.399827 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Dec 13 01:16:28.399851 kernel: fuse: init (API version 7.39) Dec 13 01:16:28.399874 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:16:28.399903 kernel: loop: module loaded Dec 13 01:16:28.399924 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:16:28.399986 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:16:28.400050 systemd-journald[1146]: Collecting audit messages is disabled. Dec 13 01:16:28.400104 systemd-journald[1146]: Journal started Dec 13 01:16:28.400148 systemd-journald[1146]: Runtime Journal (/run/log/journal/7c27f21b9a9647398dd1be837b69aa69) is 8.0M, max 148.7M, 140.7M free. Dec 13 01:16:28.420827 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Dec 13 01:16:28.451807 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:16:28.478800 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:28.488806 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:16:28.501408 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:16:28.511148 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:16:28.521106 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:16:28.531096 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:16:28.542236 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:16:28.553123 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:16:28.563434 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:16:28.575298 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:28.587197 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:16:28.587438 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:16:28.599228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:28.599466 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:28.611210 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:16:28.611444 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:16:28.621228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:28.621466 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:28.633239 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:16:28.633480 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:16:28.643239 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:28.643484 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:28.654350 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:16:28.665294 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:16:28.677252 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:16:28.689267 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:16:28.713102 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:16:28.734903 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:16:28.749921 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:16:28.759912 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:16:28.767004 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:16:28.783812 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:16:28.794974 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 01:16:28.801558 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:16:28.809888 systemd-journald[1146]: Time spent on flushing to /var/log/journal/7c27f21b9a9647398dd1be837b69aa69 is 70.698ms for 916 entries. Dec 13 01:16:28.809888 systemd-journald[1146]: System Journal (/var/log/journal/7c27f21b9a9647398dd1be837b69aa69) is 8.0M, max 584.8M, 576.8M free. Dec 13 01:16:28.900810 systemd-journald[1146]: Received client request to flush runtime journal. Dec 13 01:16:28.818072 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:16:28.828960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:16:28.849021 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:16:28.871986 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:16:28.892418 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:16:28.904502 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:16:28.917462 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:16:28.929451 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:16:28.941815 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:28.945624 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Dec 13 01:16:28.945658 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Dec 13 01:16:28.956419 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:16:28.975160 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:16:28.977061 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:16:28.994981 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:16:29.061757 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:16:29.083060 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:16:29.108356 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Dec 13 01:16:29.108845 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Dec 13 01:16:29.117695 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:29.621191 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:16:29.640014 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:16:29.671046 systemd-udevd[1209]: Using default interface naming scheme 'v255'. Dec 13 01:16:29.711661 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:16:29.735009 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:16:29.772936 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:16:29.793834 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Dec 13 01:16:29.867817 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1225) Dec 13 01:16:29.879806 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1225) Dec 13 01:16:29.919982 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:16:29.988828 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 01:16:30.020974 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 01:16:30.041608 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:16:30.041643 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 13 01:16:30.041671 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 01:16:30.060263 systemd-networkd[1220]: lo: Link UP Dec 13 01:16:30.061020 systemd-networkd[1220]: lo: Gained carrier Dec 13 01:16:30.065012 systemd-networkd[1220]: Enumeration completed Dec 13 01:16:30.065199 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:16:30.066200 systemd-networkd[1220]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:30.066212 systemd-networkd[1220]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:16:30.067150 systemd-networkd[1220]: eth0: Link UP Dec 13 01:16:30.067247 systemd-networkd[1220]: eth0: Gained carrier Dec 13 01:16:30.067274 systemd-networkd[1220]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:30.075885 systemd-networkd[1220]: eth0: DHCPv4 address 10.128.0.87/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 01:16:30.097105 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1224) Dec 13 01:16:30.092184 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:16:30.133853 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Dec 13 01:16:30.166827 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:16:30.196857 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:16:30.229979 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 13 01:16:30.247177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:16:30.265467 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:16:30.284991 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:16:30.302671 lvm[1253]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:16:30.334441 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:16:30.335102 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:30.343086 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:16:30.353044 lvm[1256]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:16:30.384539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:30.396465 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
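The DHCPv4 lease above is the characteristically odd GCE kind: a /32 address whose gateway is not inside the prefix, so the gateway can only be reached via an on-link host route installed before the default route through it. A small check with Python's ipaddress module makes the situation concrete:

# Illustration of the GCE DHCP lease above: a /32 address whose gateway
# lies outside the prefix, which forces an on-link host route.
import ipaddress

iface = ipaddress.ip_interface("10.128.0.87/32")
gateway = ipaddress.ip_address("10.128.0.1")

print(gateway in iface.network)   # False: unreachable by the prefix alone
# networkd therefore installs 10.128.0.1 as an on-link route first,
# then the default route via 10.128.0.1.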
Dec 13 01:16:30.409461 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:16:30.420929 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:16:30.420980 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:16:30.430915 systemd[1]: Reached target machines.target - Containers. Dec 13 01:16:30.440332 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:16:30.456983 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:16:30.474985 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:16:30.485072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:30.493001 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:16:30.511063 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:16:30.514451 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:16:30.534045 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:16:30.546253 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:16:30.574343 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 01:16:30.576701 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:16:30.578406 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:16:30.652960 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:16:30.681822 kernel: loop1: detected capacity change from 0 to 54824 Dec 13 01:16:30.747827 kernel: loop2: detected capacity change from 0 to 142488 Dec 13 01:16:30.840855 kernel: loop3: detected capacity change from 0 to 140768 Dec 13 01:16:30.922112 kernel: loop4: detected capacity change from 0 to 211296 Dec 13 01:16:30.957886 kernel: loop5: detected capacity change from 0 to 54824 Dec 13 01:16:30.989825 kernel: loop6: detected capacity change from 0 to 142488 Dec 13 01:16:31.040060 kernel: loop7: detected capacity change from 0 to 140768 Dec 13 01:16:31.079569 (sd-merge)[1281]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Dec 13 01:16:31.080510 (sd-merge)[1281]: Merged extensions into '/usr'. Dec 13 01:16:31.087648 systemd[1]: Reloading requested from client PID 1269 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:16:31.087674 systemd[1]: Reloading... Dec 13 01:16:31.165803 zram_generator::config[1305]: No configuration found. Dec 13 01:16:31.267209 systemd-networkd[1220]: eth0: Gained IPv6LL Dec 13 01:16:31.353711 ldconfig[1264]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:16:31.395190 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:16:31.476661 systemd[1]: Reloading finished in 388 ms. 
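The loop0..loop7 lines and the (sd-merge) messages above are systemd-sysext at work: each extension image is attached to a loop device, and the extensions' /usr trees are stacked on top of the host /usr as layers of a read-only overlayfs, which is why a daemon reload follows the merge. A conceptual sketch of the layering; the /run/sysext paths here are placeholders, since the real mount points live under systemd's private runtime directories:

# Sketch of the merge step: systemd-sysext mounts each extension image,
# then overlays their /usr trees (plus the original /usr) into a single
# read-only view.
extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-gce"]

layers = [f"/run/sysext/{name}/usr" for name in extensions]
lowerdir = ":".join(layers + ["/usr"])   # host /usr is the bottom layer

# Equivalent mount invocation (overlayfs is read-only without an upperdir):
print(f"mount -t overlay overlay -o lowerdir={lowerdir} /usr")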
Dec 13 01:16:31.496685 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:16:31.508428 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:16:31.518319 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:16:31.539037 systemd[1]: Starting ensure-sysext.service... Dec 13 01:16:31.555031 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:16:31.569933 systemd[1]: Reloading requested from client PID 1359 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:16:31.569960 systemd[1]: Reloading... Dec 13 01:16:31.598388 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:16:31.599686 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:16:31.601727 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:16:31.602511 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Dec 13 01:16:31.602765 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Dec 13 01:16:31.608006 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:16:31.608169 systemd-tmpfiles[1360]: Skipping /boot Dec 13 01:16:31.628660 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:16:31.628905 systemd-tmpfiles[1360]: Skipping /boot Dec 13 01:16:31.694810 zram_generator::config[1387]: No configuration found. Dec 13 01:16:31.846144 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:16:31.928129 systemd[1]: Reloading finished in 357 ms. Dec 13 01:16:31.959697 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:31.988147 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:32.007235 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:16:32.025430 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:16:32.044133 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:16:32.045065 augenrules[1453]: No rules Dec 13 01:16:32.062014 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:16:32.074549 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:32.097669 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:32.098212 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:32.113059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:32.132801 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:32.155182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:32.165062 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
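The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") reflect its merge rule: configuration fragments are read in order and only the first line for a given path takes effect; later lines for the same path are dropped with exactly this warning. A toy model of that first-wins behavior, with deliberately naive parsing:

# Toy model of the tmpfiles.d duplicate handling warned about above.
# A real tmpfiles.d line looks like: "d /root 0700 root root -".
def first_wins(fragments: list[list[str]]) -> dict[str, str]:
    seen: dict[str, str] = {}
    for fragment in fragments:
        for line in fragment:
            path = line.split()[1]   # field 2 of a tmpfiles.d line is the path
            if path in seen:
                print(f'Duplicate line for path "{path}", ignoring.')
            else:
                seen[path] = line
    return seen

first_wins([["d /root 0700 root root -"],
            ["d /root 0750 root root -"]])   # second line is ignored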
Dec 13 01:16:32.165399 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:32.171186 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:16:32.172568 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:16:32.196091 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:16:32.208057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:32.208333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:32.220135 systemd-resolved[1455]: Positive Trust Anchors: Dec 13 01:16:32.220633 systemd-resolved[1455]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:16:32.220697 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:32.220936 systemd-resolved[1455]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:16:32.221002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:32.226003 systemd-resolved[1455]: Defaulting to hostname 'linux'. Dec 13 01:16:32.233427 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:16:32.243585 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:32.243899 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:32.262134 systemd[1]: Reached target network.target - Network. Dec 13 01:16:32.271038 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:16:32.281073 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:16:32.292015 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:32.292364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:32.299132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:32.314107 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:32.334175 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:32.345027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:32.354136 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:16:32.363898 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 01:16:32.364196 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:32.370542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:32.370852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:32.382840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:32.383132 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:32.395551 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:32.395847 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:32.406649 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:16:32.424350 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:32.424811 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:32.435133 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:32.450134 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:16:32.472159 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:32.492165 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:32.511229 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:16:32.520070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:32.520768 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:16:32.531093 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:16:32.531278 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:32.534583 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:32.535062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:32.547955 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:16:32.548256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:16:32.558574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:32.558885 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:32.570550 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:32.570844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:32.593623 systemd[1]: Finished ensure-sysext.service. Dec 13 01:16:32.602616 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:16:32.625191 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Dec 13 01:16:32.636980 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:16:32.637076 systemd[1]: Reached target sysinit.target - System Initialization. 
Dec 13 01:16:32.647122 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:16:32.658031 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:16:32.669125 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:16:32.679085 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:16:32.689926 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:16:32.700941 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:16:32.701075 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:16:32.709899 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:16:32.720095 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:16:32.731743 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:16:32.740313 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:16:32.742491 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Dec 13 01:16:32.754183 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:16:32.766820 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:16:32.776902 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:16:32.786915 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:16:32.795174 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:16:32.795251 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:16:32.795287 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:16:32.800941 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:16:32.825015 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:16:32.842182 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:16:32.852968 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:16:32.884618 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:16:32.894929 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:16:32.896276 jq[1531]: false Dec 13 01:16:32.910107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:16:32.926258 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:16:32.943930 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:16:32.954515 coreos-metadata[1528]: Dec 13 01:16:32.950 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Dec 13 01:16:32.958912 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
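coreos-metadata begins polling the GCE metadata server here. The same endpoints can be queried with nothing but the standard library; the link-local address and the mandatory Metadata-Flavor header are standard GCE behavior (requests without the header are rejected):

    #!/usr/bin/env python3
    """Sketch: query the GCE metadata server as coreos-metadata does."""
    import urllib.request

    BASE = "http://169.254.169.254/computeMetadata/v1"

    def fetch(path):
        req = urllib.request.Request(
            f"{BASE}/{path}",
            headers={"Metadata-Flavor": "Google"},  # mandatory on GCE
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        print(fetch("instance/hostname"))
        print(fetch("instance/network-interfaces/0/ip"))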
Dec 13 01:16:32.960927 coreos-metadata[1528]: Dec 13 01:16:32.960 INFO Fetch successful Dec 13 01:16:32.960927 coreos-metadata[1528]: Dec 13 01:16:32.960 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Dec 13 01:16:32.961195 extend-filesystems[1532]: Found loop4 Dec 13 01:16:32.973397 extend-filesystems[1532]: Found loop5 Dec 13 01:16:32.973397 extend-filesystems[1532]: Found loop6 Dec 13 01:16:32.973397 extend-filesystems[1532]: Found loop7 Dec 13 01:16:32.973397 extend-filesystems[1532]: Found sda Dec 13 01:16:32.973397 extend-filesystems[1532]: Found sda1 Dec 13 01:16:32.973397 extend-filesystems[1532]: Found sda2 Dec 13 01:16:32.973397 extend-filesystems[1532]: Found sda3 Dec 13 01:16:32.973397 extend-filesystems[1532]: Found usr Dec 13 01:16:32.973397 extend-filesystems[1532]: Found sda4 Dec 13 01:16:32.973397 extend-filesystems[1532]: Found sda6 Dec 13 01:16:32.973397 extend-filesystems[1532]: Found sda7 Dec 13 01:16:32.973397 extend-filesystems[1532]: Found sda9 Dec 13 01:16:32.973397 extend-filesystems[1532]: Checking size of /dev/sda9 Dec 13 01:16:33.118624 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 01:16:33.118695 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 01:16:33.118734 coreos-metadata[1528]: Dec 13 01:16:32.962 INFO Fetch successful Dec 13 01:16:33.118734 coreos-metadata[1528]: Dec 13 01:16:32.962 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Dec 13 01:16:33.118734 coreos-metadata[1528]: Dec 13 01:16:32.964 INFO Fetch successful Dec 13 01:16:33.118734 coreos-metadata[1528]: Dec 13 01:16:32.964 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Dec 13 01:16:33.118734 coreos-metadata[1528]: Dec 13 01:16:32.968 INFO Fetch successful Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: ---------------------------------------------------- Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: corporation. 
Support and training for ntp-4 are Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: available at https://www.nwtime.org/support Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: ---------------------------------------------------- Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: proto: precision = 0.073 usec (-24) Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: basedate set to 2024-11-30 Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: gps base set to 2024-12-01 (week 2343) Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: Listen normally on 3 eth0 10.128.0.87:123 Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: Listen normally on 4 lo [::1]:123 Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:57%2]:123 Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: Listening on routing socket on fd #22 for interface updates Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:16:33.119037 ntpd[1539]: 13 Dec 01:16:33 ntpd[1539]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:16:33.014455 dbus-daemon[1530]: [system] SELinux support is enabled Dec 13 01:16:32.979069 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Dec 13 01:16:33.123632 extend-filesystems[1532]: Resized partition /dev/sda9 Dec 13 01:16:33.030489 dbus-daemon[1530]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1220 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:16:33.036930 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:16:33.141755 extend-filesystems[1555]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:16:33.141755 extend-filesystems[1555]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 01:16:33.141755 extend-filesystems[1555]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 01:16:33.141755 extend-filesystems[1555]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 01:16:33.042102 ntpd[1539]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:16:33.056002 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:16:33.198296 extend-filesystems[1532]: Resized filesystem in /dev/sda9 Dec 13 01:16:33.210999 init.sh[1543]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 01:16:33.210999 init.sh[1543]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 01:16:33.210999 init.sh[1543]: + /usr/bin/google_instance_setup Dec 13 01:16:33.042136 ntpd[1539]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:16:33.087091 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:16:33.042151 ntpd[1539]: ---------------------------------------------------- Dec 13 01:16:33.109013 systemd[1]: Starting systemd-logind.service - User Login Management... 
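The resize2fs output above grows the root filesystem online from 1617920 to 2538491 4-KiB blocks (roughly 6.2 GiB to 9.7 GiB) while it is mounted at /. ext4 supports growing a mounted filesystem, so the operation reduces to a single call; a sketch, assuming the device name from this log:

    #!/usr/bin/env python3
    """Sketch: online-grow a mounted ext4 filesystem, as
    extend-filesystems.service does above."""
    import subprocess

    DEVICE = "/dev/sda9"  # root filesystem device from this log

    def grow(device):
        # With no explicit size, resize2fs grows the filesystem to fill
        # the partition. Growing works online; shrinking would require
        # the filesystem to be unmounted first.
        subprocess.run(["resize2fs", device], check=True)

    if __name__ == "__main__":
        grow(DEVICE)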
Dec 13 01:16:33.042164 ntpd[1539]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:16:33.130528 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 01:16:33.042180 ntpd[1539]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:16:33.147324 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:16:33.042197 ntpd[1539]: corporation. Support and training for ntp-4 are Dec 13 01:16:33.168961 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:16:33.217425 jq[1581]: true Dec 13 01:16:33.042210 ntpd[1539]: available at https://www.nwtime.org/support Dec 13 01:16:33.203563 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:16:33.042223 ntpd[1539]: ---------------------------------------------------- Dec 13 01:16:33.048694 ntpd[1539]: proto: precision = 0.073 usec (-24) Dec 13 01:16:33.051040 ntpd[1539]: basedate set to 2024-11-30 Dec 13 01:16:33.051066 ntpd[1539]: gps base set to 2024-12-01 (week 2343) Dec 13 01:16:33.067376 ntpd[1539]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:16:33.068256 ntpd[1539]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:16:33.073926 ntpd[1539]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:16:33.073986 ntpd[1539]: Listen normally on 3 eth0 10.128.0.87:123 Dec 13 01:16:33.074050 ntpd[1539]: Listen normally on 4 lo [::1]:123 Dec 13 01:16:33.074115 ntpd[1539]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:57%2]:123 Dec 13 01:16:33.074170 ntpd[1539]: Listening on routing socket on fd #22 for interface updates Dec 13 01:16:33.081864 ntpd[1539]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:16:33.081903 ntpd[1539]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:16:33.247809 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1580) Dec 13 01:16:33.251434 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:16:33.254883 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:16:33.255396 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:16:33.255758 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:16:33.284436 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:16:33.284872 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:16:33.297649 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:16:33.311144 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:16:33.311541 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:16:33.345027 update_engine[1577]: I20241213 01:16:33.334715 1577 main.cc:92] Flatcar Update Engine starting Dec 13 01:16:33.357840 jq[1594]: true Dec 13 01:16:33.373805 update_engine[1577]: I20241213 01:16:33.373039 1577 update_check_scheduler.cc:74] Next update check in 8m39s Dec 13 01:16:33.390416 (ntainerd)[1595]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:16:33.404754 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Dec 13 01:16:33.456437 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:16:33.483006 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:16:33.495747 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:16:33.496563 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:16:33.496626 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:16:33.515989 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:16:33.525964 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:16:33.526005 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:16:33.539600 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:16:33.551020 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:16:33.577385 tar[1592]: linux-amd64/helm Dec 13 01:16:33.642266 bash[1631]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:16:33.643409 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:16:33.670519 systemd-logind[1571]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:16:33.670554 systemd-logind[1571]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 01:16:33.670585 systemd-logind[1571]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:16:33.677994 systemd[1]: Starting sshkeys.service... Dec 13 01:16:33.681546 systemd-logind[1571]: New seat seat0. Dec 13 01:16:33.705983 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:16:33.781030 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:16:33.804347 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
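The coreos-metadata-sshkeys fetches in the records that follow walk a fallback chain: instance-level key attributes first, then (unless block-project-ssh-keys is set) project-level ones, with a 404 simply meaning "attribute not set". A sketch of that probe order, assuming the same metadata endpoints:

    #!/usr/bin/env python3
    """Sketch: the SSH-key fallback chain walked by the following
    coreos-metadata records; 404 responses are expected and skipped."""
    import urllib.error
    import urllib.request

    BASE = "http://169.254.169.254/computeMetadata/v1"
    ATTRIBUTES = [
        "instance/attributes/sshKeys",   # legacy instance-level keys
        "instance/attributes/ssh-keys",
        "project/attributes/sshKeys",    # legacy project-level keys
        "project/attributes/ssh-keys",
    ]

    def fetch(path):
        req = urllib.request.Request(f"{BASE}/{path}",
                                     headers={"Metadata-Flavor": "Google"})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.read().decode()
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return None  # attribute not present; try the next one
            raise

    if __name__ == "__main__":
        keys = [v for v in (fetch(a) for a in ATTRIBUTES) if v]
        print(f"collected {len(keys)} key attribute(s)")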
Dec 13 01:16:33.883394 coreos-metadata[1638]: Dec 13 01:16:33.883 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 01:16:33.883394 coreos-metadata[1638]: Dec 13 01:16:33.883 INFO Fetch failed with 404: resource not found Dec 13 01:16:33.884008 coreos-metadata[1638]: Dec 13 01:16:33.883 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 01:16:33.888799 coreos-metadata[1638]: Dec 13 01:16:33.887 INFO Fetch successful Dec 13 01:16:33.888799 coreos-metadata[1638]: Dec 13 01:16:33.887 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 01:16:33.888799 coreos-metadata[1638]: Dec 13 01:16:33.887 INFO Fetch failed with 404: resource not found Dec 13 01:16:33.888799 coreos-metadata[1638]: Dec 13 01:16:33.887 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 01:16:33.888799 coreos-metadata[1638]: Dec 13 01:16:33.887 INFO Fetch failed with 404: resource not found Dec 13 01:16:33.888799 coreos-metadata[1638]: Dec 13 01:16:33.887 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 01:16:33.894126 coreos-metadata[1638]: Dec 13 01:16:33.891 INFO Fetch successful Dec 13 01:16:33.901712 unknown[1638]: wrote ssh authorized keys file for user: core Dec 13 01:16:33.993818 update-ssh-keys[1645]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:16:33.994565 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:16:34.010417 systemd[1]: Finished sshkeys.service. Dec 13 01:16:34.060928 locksmithd[1626]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:16:34.066107 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:16:34.066410 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:16:34.070086 dbus-daemon[1530]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1623 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:16:34.088816 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:16:34.205213 polkitd[1658]: Started polkitd version 121 Dec 13 01:16:34.227465 polkitd[1658]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:16:34.227560 polkitd[1658]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:16:34.230420 polkitd[1658]: Finished loading, compiling and executing 2 rules Dec 13 01:16:34.235244 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:16:34.235506 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:16:34.237333 polkitd[1658]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:16:34.306493 systemd-resolved[1455]: System hostname changed to 'ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal'. Dec 13 01:16:34.306702 systemd-hostnamed[1623]: Hostname set to (transient) Dec 13 01:16:34.558864 containerd[1595]: time="2024-12-13T01:16:34.557308609Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:16:34.714907 containerd[1595]: time="2024-12-13T01:16:34.714816328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:16:34.722830 containerd[1595]: time="2024-12-13T01:16:34.722275044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:34.722830 containerd[1595]: time="2024-12-13T01:16:34.722333782Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:16:34.722830 containerd[1595]: time="2024-12-13T01:16:34.722374762Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:16:34.722830 containerd[1595]: time="2024-12-13T01:16:34.722587829Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:16:34.722830 containerd[1595]: time="2024-12-13T01:16:34.722618390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:34.722830 containerd[1595]: time="2024-12-13T01:16:34.722718680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:34.722830 containerd[1595]: time="2024-12-13T01:16:34.722741458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:34.727813 containerd[1595]: time="2024-12-13T01:16:34.726096429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:34.727813 containerd[1595]: time="2024-12-13T01:16:34.726137398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:34.727813 containerd[1595]: time="2024-12-13T01:16:34.726165278Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:34.727813 containerd[1595]: time="2024-12-13T01:16:34.726185610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:34.727813 containerd[1595]: time="2024-12-13T01:16:34.726307794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:34.727813 containerd[1595]: time="2024-12-13T01:16:34.726597789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:34.728968 containerd[1595]: time="2024-12-13T01:16:34.728382015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:34.728968 containerd[1595]: time="2024-12-13T01:16:34.728422174Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 01:16:34.728968 containerd[1595]: time="2024-12-13T01:16:34.728550786Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:16:34.728968 containerd[1595]: time="2024-12-13T01:16:34.728638382Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:16:34.739830 containerd[1595]: time="2024-12-13T01:16:34.739221233Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:16:34.739830 containerd[1595]: time="2024-12-13T01:16:34.739320340Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:16:34.739830 containerd[1595]: time="2024-12-13T01:16:34.739348237Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:16:34.739830 containerd[1595]: time="2024-12-13T01:16:34.739424411Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:16:34.739830 containerd[1595]: time="2024-12-13T01:16:34.739448286Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:16:34.739830 containerd[1595]: time="2024-12-13T01:16:34.739673099Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.740615404Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.740801222Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.740829183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.740850129Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.740874250Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.740897180Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.740917855Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.740940636Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.740964296Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.740986113Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.741007788Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.741027906Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.741058124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.742915 containerd[1595]: time="2024-12-13T01:16:34.741090557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741112418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741134227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741155315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741177162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741199089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741221299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741242660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741266809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741287026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741307465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741328115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741353670Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741386569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741406114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.743614 containerd[1595]: time="2024-12-13T01:16:34.741424364Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:16:34.744291 containerd[1595]: time="2024-12-13T01:16:34.741484571Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Dec 13 01:16:34.744291 containerd[1595]: time="2024-12-13T01:16:34.741520076Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:16:34.744291 containerd[1595]: time="2024-12-13T01:16:34.741538611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:16:34.744291 containerd[1595]: time="2024-12-13T01:16:34.741560425Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:16:34.744291 containerd[1595]: time="2024-12-13T01:16:34.741577052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.744291 containerd[1595]: time="2024-12-13T01:16:34.741597082Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:16:34.744291 containerd[1595]: time="2024-12-13T01:16:34.741613638Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:16:34.744291 containerd[1595]: time="2024-12-13T01:16:34.741630360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.750517 containerd[1595]: time="2024-12-13T01:16:34.749005787Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:16:34.750517 containerd[1595]: time="2024-12-13T01:16:34.749132195Z" level=info msg="Connect containerd service" Dec 13 01:16:34.750517 containerd[1595]: time="2024-12-13T01:16:34.749194934Z" level=info msg="using legacy CRI server" Dec 13 01:16:34.750517 containerd[1595]: time="2024-12-13T01:16:34.749207929Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:16:34.750517 containerd[1595]: time="2024-12-13T01:16:34.749358483Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:16:34.750517 containerd[1595]: time="2024-12-13T01:16:34.750214939Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:16:34.752800 containerd[1595]: time="2024-12-13T01:16:34.751225540Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:16:34.752800 containerd[1595]: time="2024-12-13T01:16:34.751307555Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:16:34.752800 containerd[1595]: time="2024-12-13T01:16:34.751372066Z" level=info msg="Start subscribing containerd event" Dec 13 01:16:34.752800 containerd[1595]: time="2024-12-13T01:16:34.751425708Z" level=info msg="Start recovering state" Dec 13 01:16:34.752800 containerd[1595]: time="2024-12-13T01:16:34.751513051Z" level=info msg="Start event monitor" Dec 13 01:16:34.752800 containerd[1595]: time="2024-12-13T01:16:34.751536917Z" level=info msg="Start snapshots syncer" Dec 13 01:16:34.752800 containerd[1595]: time="2024-12-13T01:16:34.751551167Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:16:34.752800 containerd[1595]: time="2024-12-13T01:16:34.751562622Z" level=info msg="Start streaming server" Dec 13 01:16:34.752800 containerd[1595]: time="2024-12-13T01:16:34.751638540Z" level=info msg="containerd successfully booted in 0.201233s" Dec 13 01:16:34.751848 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:16:35.062848 instance-setup[1566]: INFO Running google_set_multiqueue. Dec 13 01:16:35.102532 instance-setup[1566]: INFO Set channels for eth0 to 2. Dec 13 01:16:35.111043 instance-setup[1566]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 01:16:35.114269 instance-setup[1566]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 01:16:35.114735 instance-setup[1566]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 01:16:35.117303 instance-setup[1566]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 01:16:35.117869 instance-setup[1566]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 13 01:16:35.122438 instance-setup[1566]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 01:16:35.122490 instance-setup[1566]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
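containerd boots successfully above, but its CRI plugin logs "no network config found in /etc/cni/net.d" because no CNI plugin has installed a network yet; on a Kubernetes node that error clears once the cluster's network add-on drops a config there. A sketch of the kind of conflist that satisfies the loader (the network name and subnet are placeholders, not this cluster's values):

    #!/usr/bin/env python3
    """Sketch: write a minimal bridge CNI conflist of the kind the CRI
    plugin looks for in /etc/cni/net.d. Values are illustrative only."""
    import json
    from pathlib import Path

    CONF = {
        "cniVersion": "0.4.0",
        "name": "example-bridge",          # hypothetical network name
        "plugins": [{
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",  # placeholder pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }],
    }

    if __name__ == "__main__":
        dest = Path("/etc/cni/net.d/10-example.conflist")
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(json.dumps(CONF, indent=2))
        print(f"wrote {dest}")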
Dec 13 01:16:35.124894 instance-setup[1566]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 01:16:35.152112 instance-setup[1566]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 13 01:16:35.168667 instance-setup[1566]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 13 01:16:35.176454 instance-setup[1566]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 01:16:35.176539 instance-setup[1566]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 01:16:35.185330 tar[1592]: linux-amd64/LICENSE Dec 13 01:16:35.185972 tar[1592]: linux-amd64/README.md Dec 13 01:16:35.212922 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:16:35.217824 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:16:35.238811 init.sh[1543]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 01:16:35.272231 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:16:35.295017 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:16:35.315357 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:16:35.315756 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:16:35.333586 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:16:35.373299 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:16:35.395273 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:16:35.413226 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:16:35.424240 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:16:35.443684 startup-script[1711]: INFO Starting startup scripts. Dec 13 01:16:35.449760 startup-script[1711]: INFO No startup scripts found in metadata. Dec 13 01:16:35.449859 startup-script[1711]: INFO Finished running startup scripts. Dec 13 01:16:35.470897 init.sh[1543]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 01:16:35.470897 init.sh[1543]: + daemon_pids=() Dec 13 01:16:35.471086 init.sh[1543]: + for d in accounts clock_skew network Dec 13 01:16:35.471579 init.sh[1543]: + daemon_pids+=($!) Dec 13 01:16:35.471579 init.sh[1543]: + for d in accounts clock_skew network Dec 13 01:16:35.471724 init.sh[1728]: + /usr/bin/google_accounts_daemon Dec 13 01:16:35.472144 init.sh[1543]: + daemon_pids+=($!) Dec 13 01:16:35.472144 init.sh[1543]: + for d in accounts clock_skew network Dec 13 01:16:35.472144 init.sh[1543]: + daemon_pids+=($!) Dec 13 01:16:35.472144 init.sh[1543]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 01:16:35.472144 init.sh[1543]: + /usr/bin/systemd-notify --ready Dec 13 01:16:35.473585 init.sh[1729]: + /usr/bin/google_clock_skew_daemon Dec 13 01:16:35.474051 init.sh[1730]: + /usr/bin/google_network_daemon Dec 13 01:16:35.492758 systemd[1]: Started oem-gce.service - GCE Linux Agent. Dec 13 01:16:35.506949 init.sh[1543]: + wait -n 1728 1729 1730 Dec 13 01:16:35.819696 google-clock-skew[1729]: INFO Starting Google Clock Skew daemon. Dec 13 01:16:35.828186 google-clock-skew[1729]: INFO Clock drift token has changed: 0. Dec 13 01:16:35.836509 google-networking[1730]: INFO Starting Google Networking daemon. 
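google_set_multiqueue, seen above, pins the virtio-net interrupts to specific CPUs and sets per-queue XPS masks, tolerating the occasional "write error" on sysfs files. The writes themselves are plain file I/O; a sketch using the IRQ numbers and masks from this log (root is required, and individual writes can still fail as shown above):

    #!/usr/bin/env python3
    """Sketch: the IRQ-affinity and XPS writes performed by
    google_set_multiqueue above."""
    from pathlib import Path

    def pin_irq(irq, cpu):
        # Restrict delivery of this interrupt line to a single CPU.
        Path(f"/proc/irq/{irq}/smp_affinity_list").write_text(str(cpu))

    def set_xps(dev, queue, cpu_mask):
        # XPS: steer transmit work for this queue to the CPUs in the
        # hex mask.
        path = Path(f"/sys/class/net/{dev}/queues/tx-{queue}/xps_cpus")
        path.write_text(format(cpu_mask, "x"))

    if __name__ == "__main__":
        for irq, cpu in [(31, 0), (32, 0), (33, 1), (34, 1)]:  # from the log
            pin_irq(irq, cpu)
        set_xps("eth0", 0, 0x1)  # Queue 0 XPS=1 above: CPU 0
        set_xps("eth0", 1, 0x2)  # Queue 1 XPS=2 above: CPU 1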
Dec 13 01:16:35.880564 groupadd[1740]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 01:16:35.885799 groupadd[1740]: group added to /etc/gshadow: name=google-sudoers Dec 13 01:16:35.940512 groupadd[1740]: new group: name=google-sudoers, GID=1000 Dec 13 01:16:35.983871 google-accounts[1728]: INFO Starting Google Accounts daemon. Dec 13 01:16:35.988015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:16:35.999674 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:16:36.002234 google-accounts[1728]: WARNING OS Login not installed. Dec 13 01:16:36.003866 google-accounts[1728]: INFO Creating a new user account for 0. Dec 13 01:16:36.007530 init.sh[1757]: useradd: invalid user name '0': use --badname to ignore Dec 13 01:16:36.007850 google-accounts[1728]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 01:16:36.012434 systemd[1]: Startup finished in 9.655s (kernel) + 9.671s (userspace) = 19.326s. Dec 13 01:16:36.016701 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:16:37.000068 systemd-resolved[1455]: Clock change detected. Flushing caches. Dec 13 01:16:37.000638 google-clock-skew[1729]: INFO Synced system time with hardware clock. Dec 13 01:16:37.391003 kubelet[1755]: E1213 01:16:37.390746 1755 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:16:37.394063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:16:37.394505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:16:41.097178 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:16:41.109265 systemd[1]: Started sshd@0-10.128.0.87:22-147.75.109.163:44536.service - OpenSSH per-connection server daemon (147.75.109.163:44536). Dec 13 01:16:41.397313 sshd[1771]: Accepted publickey for core from 147.75.109.163 port 44536 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:41.399275 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:41.410221 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:16:41.422263 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:16:41.426552 systemd-logind[1571]: New session 1 of user core. Dec 13 01:16:41.442026 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:16:41.456262 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:16:41.476251 (systemd)[1777]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:16:41.601712 systemd[1777]: Queued start job for default target default.target. Dec 13 01:16:41.602292 systemd[1777]: Created slice app.slice - User Application Slice. Dec 13 01:16:41.602330 systemd[1777]: Reached target paths.target - Paths. Dec 13 01:16:41.602352 systemd[1777]: Reached target timers.target - Timers. Dec 13 01:16:41.607035 systemd[1777]: Starting dbus.socket - D-Bus User Message Bus Socket... 
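The kubelet exit above is expected on a freshly provisioned node: /var/lib/kubelet/config.yaml is written by kubeadm during init or join, and until it exists systemd keeps restarting the unit under its restart policy. A sketch of a preflight check for that state:

    #!/usr/bin/env python3
    """Sketch: preflight check matching the kubelet failure above. On a
    kubeadm-managed node the config file appears after 'kubeadm init'
    or 'kubeadm join'; before that, the unit crash-loops."""
    import subprocess
    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if __name__ == "__main__":
        if CONFIG.is_file():
            print(f"{CONFIG} present: kubelet can start")
        else:
            print(f"{CONFIG} missing: node not yet initialized/joined")
            # systemd's view of the crash loop:
            subprocess.run(["systemctl", "status", "kubelet", "--no-pager"])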
Dec 13 01:16:41.618317 systemd[1777]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:16:41.618416 systemd[1777]: Reached target sockets.target - Sockets. Dec 13 01:16:41.618440 systemd[1777]: Reached target basic.target - Basic System. Dec 13 01:16:41.618502 systemd[1777]: Reached target default.target - Main User Target. Dec 13 01:16:41.618554 systemd[1777]: Startup finished in 134ms. Dec 13 01:16:41.619655 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:16:41.634101 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:16:41.858281 systemd[1]: Started sshd@1-10.128.0.87:22-147.75.109.163:44552.service - OpenSSH per-connection server daemon (147.75.109.163:44552). Dec 13 01:16:42.150515 sshd[1789]: Accepted publickey for core from 147.75.109.163 port 44552 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:42.152290 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:42.158711 systemd-logind[1571]: New session 2 of user core. Dec 13 01:16:42.168299 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:16:42.363582 sshd[1789]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:42.367882 systemd[1]: sshd@1-10.128.0.87:22-147.75.109.163:44552.service: Deactivated successfully. Dec 13 01:16:42.373468 systemd-logind[1571]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:16:42.374286 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:16:42.375576 systemd-logind[1571]: Removed session 2. Dec 13 01:16:42.417646 systemd[1]: Started sshd@2-10.128.0.87:22-147.75.109.163:44554.service - OpenSSH per-connection server daemon (147.75.109.163:44554). Dec 13 01:16:42.697830 sshd[1797]: Accepted publickey for core from 147.75.109.163 port 44554 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:42.699656 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:42.706047 systemd-logind[1571]: New session 3 of user core. Dec 13 01:16:42.716365 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:16:42.905757 sshd[1797]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:42.911715 systemd[1]: sshd@2-10.128.0.87:22-147.75.109.163:44554.service: Deactivated successfully. Dec 13 01:16:42.916382 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:16:42.917576 systemd-logind[1571]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:16:42.919071 systemd-logind[1571]: Removed session 3. Dec 13 01:16:42.953692 systemd[1]: Started sshd@3-10.128.0.87:22-147.75.109.163:44556.service - OpenSSH per-connection server daemon (147.75.109.163:44556). Dec 13 01:16:43.233638 sshd[1805]: Accepted publickey for core from 147.75.109.163 port 44556 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:43.235386 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:43.240465 systemd-logind[1571]: New session 4 of user core. Dec 13 01:16:43.251971 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:16:43.444934 sshd[1805]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:43.450121 systemd[1]: sshd@3-10.128.0.87:22-147.75.109.163:44556.service: Deactivated successfully. Dec 13 01:16:43.454943 systemd-logind[1571]: Session 4 logged out. Waiting for processes to exit. 
Dec 13 01:16:43.455702 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:16:43.456991 systemd-logind[1571]: Removed session 4. Dec 13 01:16:43.496256 systemd[1]: Started sshd@4-10.128.0.87:22-147.75.109.163:44566.service - OpenSSH per-connection server daemon (147.75.109.163:44566). Dec 13 01:16:43.777436 sshd[1813]: Accepted publickey for core from 147.75.109.163 port 44566 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:43.779386 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:43.785577 systemd-logind[1571]: New session 5 of user core. Dec 13 01:16:43.795233 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:16:43.969653 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:16:43.970155 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:43.985539 sudo[1817]: pam_unix(sudo:session): session closed for user root Dec 13 01:16:44.028463 sshd[1813]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:44.033885 systemd[1]: sshd@4-10.128.0.87:22-147.75.109.163:44566.service: Deactivated successfully. Dec 13 01:16:44.039988 systemd-logind[1571]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:16:44.040705 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:16:44.042390 systemd-logind[1571]: Removed session 5. Dec 13 01:16:44.085299 systemd[1]: Started sshd@5-10.128.0.87:22-147.75.109.163:44582.service - OpenSSH per-connection server daemon (147.75.109.163:44582). Dec 13 01:16:44.370536 sshd[1822]: Accepted publickey for core from 147.75.109.163 port 44582 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:44.372099 sshd[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:44.378249 systemd-logind[1571]: New session 6 of user core. Dec 13 01:16:44.385623 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:16:44.550250 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:16:44.550752 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:44.555693 sudo[1827]: pam_unix(sudo:session): session closed for user root Dec 13 01:16:44.568775 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:16:44.569275 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:44.586280 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:44.589262 auditctl[1830]: No rules Dec 13 01:16:44.589774 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:16:44.590125 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:44.600678 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:44.630752 augenrules[1849]: No rules Dec 13 01:16:44.632137 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:44.635508 sudo[1826]: pam_unix(sudo:session): session closed for user root Dec 13 01:16:44.680761 sshd[1822]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:44.686074 systemd[1]: sshd@5-10.128.0.87:22-147.75.109.163:44582.service: Deactivated successfully. 
Dec 13 01:16:44.690842 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:16:44.692013 systemd-logind[1571]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:16:44.693325 systemd-logind[1571]: Removed session 6. Dec 13 01:16:44.727250 systemd[1]: Started sshd@6-10.128.0.87:22-147.75.109.163:44596.service - OpenSSH per-connection server daemon (147.75.109.163:44596). Dec 13 01:16:45.021449 sshd[1858]: Accepted publickey for core from 147.75.109.163 port 44596 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:45.023665 sshd[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:45.029987 systemd-logind[1571]: New session 7 of user core. Dec 13 01:16:45.037222 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:16:45.198822 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:16:45.199325 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:45.627259 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:16:45.630426 (dockerd)[1878]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:16:46.065931 dockerd[1878]: time="2024-12-13T01:16:46.065752034Z" level=info msg="Starting up" Dec 13 01:16:46.675953 dockerd[1878]: time="2024-12-13T01:16:46.675879640Z" level=info msg="Loading containers: start." Dec 13 01:16:46.830926 kernel: Initializing XFRM netlink socket Dec 13 01:16:46.930022 systemd-networkd[1220]: docker0: Link UP Dec 13 01:16:46.952699 dockerd[1878]: time="2024-12-13T01:16:46.952644454Z" level=info msg="Loading containers: done." Dec 13 01:16:46.977725 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3618383414-merged.mount: Deactivated successfully. Dec 13 01:16:46.978617 dockerd[1878]: time="2024-12-13T01:16:46.978022443Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:16:46.978617 dockerd[1878]: time="2024-12-13T01:16:46.978152761Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:16:46.978617 dockerd[1878]: time="2024-12-13T01:16:46.978314643Z" level=info msg="Daemon has completed initialization" Dec 13 01:16:47.015417 dockerd[1878]: time="2024-12-13T01:16:47.015328152Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:16:47.016044 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:16:47.571125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:16:47.577579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:16:47.837434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
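With "API listen on /run/docker.sock" logged above, the Engine API answers plain HTTP over that unix socket. A stdlib-only sketch (a single HTTP/1.0 request keeps the response framing simple because the server closes the connection; real tooling would use the Docker SDK or CLI instead):

    #!/usr/bin/env python3
    """Sketch: query the Docker Engine API over the unix socket that
    dockerd announced above."""
    import json
    import socket

    SOCKET_PATH = "/run/docker.sock"

    def docker_version():
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(SOCKET_PATH)
            s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
            raw = b""
            while chunk := s.recv(4096):
                raw += chunk
        # Split the HTTP headers from the JSON body.
        _headers, _, body = raw.partition(b"\r\n\r\n")
        return json.loads(body)

    if __name__ == "__main__":
        info = docker_version()
        print(info.get("Version"), info.get("ApiVersion"))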
Dec 13 01:16:47.851492 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:16:47.917677 kubelet[2027]: E1213 01:16:47.917600 2027 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:16:47.921784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:16:47.922116 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:16:48.098196 containerd[1595]: time="2024-12-13T01:16:48.098042722Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:16:48.558890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount112712321.mount: Deactivated successfully. Dec 13 01:16:50.309577 containerd[1595]: time="2024-12-13T01:16:50.309507575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:50.311282 containerd[1595]: time="2024-12-13T01:16:50.311229764Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35145882" Dec 13 01:16:50.312473 containerd[1595]: time="2024-12-13T01:16:50.312396269Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:50.316161 containerd[1595]: time="2024-12-13T01:16:50.316095131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:50.318222 containerd[1595]: time="2024-12-13T01:16:50.317602805Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.219494488s" Dec 13 01:16:50.318222 containerd[1595]: time="2024-12-13T01:16:50.317659488Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:16:50.350969 containerd[1595]: time="2024-12-13T01:16:50.350925954Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:16:52.043297 containerd[1595]: time="2024-12-13T01:16:52.043231851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:52.044863 containerd[1595]: time="2024-12-13T01:16:52.044796009Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32219666" Dec 13 01:16:52.046215 containerd[1595]: time="2024-12-13T01:16:52.046153158Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:52.050189 
containerd[1595]: time="2024-12-13T01:16:52.049635987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:52.051084 containerd[1595]: time="2024-12-13T01:16:52.051041399Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.700060646s" Dec 13 01:16:52.051188 containerd[1595]: time="2024-12-13T01:16:52.051092051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 01:16:52.080340 containerd[1595]: time="2024-12-13T01:16:52.080289243Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:16:53.148557 containerd[1595]: time="2024-12-13T01:16:53.148480828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:53.152396 containerd[1595]: time="2024-12-13T01:16:53.151444213Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17334738" Dec 13 01:16:53.153532 containerd[1595]: time="2024-12-13T01:16:53.153452222Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:53.160324 containerd[1595]: time="2024-12-13T01:16:53.160256941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:53.163076 containerd[1595]: time="2024-12-13T01:16:53.161883261Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.081538953s" Dec 13 01:16:53.163076 containerd[1595]: time="2024-12-13T01:16:53.161955359Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:16:53.190143 containerd[1595]: time="2024-12-13T01:16:53.190084202Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:16:54.236712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536120438.mount: Deactivated successfully. 
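The two "Pulled image" records above carry enough to estimate registry throughput: the logged size is in bytes and the duration is wall time. A quick back-of-the-envelope calculation:

    # size-in-bytes and duration-in-seconds, copied from the log lines above
    pulls = {
        "kube-apiserver:v1.29.12": (35136054, 2.219494488),
        "kube-controller-manager:v1.29.12": (33662844, 1.700060646),
    }
    for image, (size, secs) in pulls.items():
        print(f"{image}: {size / secs / 2**20:.1f} MiB/s")
    # kube-apiserver: 15.1 MiB/s; kube-controller-manager: 18.9 MiB/s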
Dec 13 01:16:54.774147 containerd[1595]: time="2024-12-13T01:16:54.774085491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:54.775548 containerd[1595]: time="2024-12-13T01:16:54.775481823Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28621853" Dec 13 01:16:54.776844 containerd[1595]: time="2024-12-13T01:16:54.776776595Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:54.779620 containerd[1595]: time="2024-12-13T01:16:54.779558591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:54.780484 containerd[1595]: time="2024-12-13T01:16:54.780439869Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.590299613s" Dec 13 01:16:54.780570 containerd[1595]: time="2024-12-13T01:16:54.780491437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:16:54.811308 containerd[1595]: time="2024-12-13T01:16:54.811039418Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:16:55.207387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2383038971.mount: Deactivated successfully. 
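The tmpmount unit names above (e.g. containerd\x2dmount2383038971.mount) are systemd's path escaping at work: '/' separators become '-', and any byte outside [A-Za-z0-9:_.], including a literal '-', becomes \xNN. A simplified sketch of systemd-escape --path that ignores leading-dot and empty-path edge cases:

    def systemd_escape_path(path: str) -> str:
        parts = []
        for part in path.strip("/").split("/"):
            parts.append("".join(
                c if c.isalnum() or c in ":_." else f"\\x{ord(c):02x}"
                for c in part
            ))
        return "-".join(parts)

    print(systemd_escape_path(
        "/var/lib/containerd/tmpmounts/containerd-mount2383038971") + ".mount")
    # -> var-lib-containerd-tmpmounts-containerd\x2dmount2383038971.mount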
Dec 13 01:16:56.222178 containerd[1595]: time="2024-12-13T01:16:56.222111470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:56.223942 containerd[1595]: time="2024-12-13T01:16:56.223860793Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Dec 13 01:16:56.226970 containerd[1595]: time="2024-12-13T01:16:56.225140493Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:56.229958 containerd[1595]: time="2024-12-13T01:16:56.229918789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:56.231930 containerd[1595]: time="2024-12-13T01:16:56.231872309Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.420785198s" Dec 13 01:16:56.232103 containerd[1595]: time="2024-12-13T01:16:56.232073990Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:16:56.261776 containerd[1595]: time="2024-12-13T01:16:56.261735554Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:16:56.618146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3507886102.mount: Deactivated successfully. 
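Those "Pulled image" records follow a fixed shape, so they are easy to mine from the journal. A small parser sketch; the sample is the coredns line above with journald's quote-escaping removed:

    import re

    PULLED = re.compile(r'Pulled image "(?P<image>[^"]+)".*'
                        r'size "(?P<size>\d+)" in (?P<num>[\d.]+)(?P<unit>ms|s)')

    line = ('Pulled image "registry.k8s.io/coredns/coredns:v1.11.1" with image id '
            '"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4", '
            'size "18182961" in 1.420785198s')

    m = PULLED.search(line)
    secs = float(m["num"]) * (0.001 if m["unit"] == "ms" else 1.0)
    print(m["image"], int(m["size"]), f"{secs:.3f}s")
    # registry.k8s.io/coredns/coredns:v1.11.1 18182961 1.421s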
Dec 13 01:16:56.627471 containerd[1595]: time="2024-12-13T01:16:56.627412485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:56.628750 containerd[1595]: time="2024-12-13T01:16:56.628680022Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Dec 13 01:16:56.629936 containerd[1595]: time="2024-12-13T01:16:56.629873501Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:56.632638 containerd[1595]: time="2024-12-13T01:16:56.632575322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:56.633916 containerd[1595]: time="2024-12-13T01:16:56.633710346Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 371.927088ms" Dec 13 01:16:56.633916 containerd[1595]: time="2024-12-13T01:16:56.633758074Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:16:56.664377 containerd[1595]: time="2024-12-13T01:16:56.664336477Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:16:57.077007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount5308622.mount: Deactivated successfully. Dec 13 01:16:58.165137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:16:58.172204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:16:58.442624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:16:58.454673 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:16:58.551232 kubelet[2246]: E1213 01:16:58.551065 2246 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:16:58.555962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:16:58.556360 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
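Both kubelet crashes so far are the expected pre-bootstrap state: the unit keeps restarting until /var/lib/kubelet/config.yaml exists, and on a kubeadm-style node that file is written during init/join rather than by hand. Purely to illustrate the shape the kubelet is looking for, a hypothetical minimal stub (staticPodPath matches the "Adding static pod path" lines later in this log):

    from pathlib import Path

    # Hypothetical minimal KubeletConfiguration -- a real node gets a much
    # fuller file generated by kubeadm; this only satisfies the file check.
    MINIMAL = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests
    """

    def write_stub(path: str = "/var/lib/kubelet/config.yaml") -> None:
        p = Path(path)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_text(MINIMAL)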
Dec 13 01:16:59.342831 containerd[1595]: time="2024-12-13T01:16:59.342763185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:59.344542 containerd[1595]: time="2024-12-13T01:16:59.344477974Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56659115" Dec 13 01:16:59.345717 containerd[1595]: time="2024-12-13T01:16:59.345638329Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:59.351070 containerd[1595]: time="2024-12-13T01:16:59.350441415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:59.352283 containerd[1595]: time="2024-12-13T01:16:59.352227141Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.687836833s" Dec 13 01:16:59.352404 containerd[1595]: time="2024-12-13T01:16:59.352289827Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:17:03.143519 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:03.150244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:03.185070 systemd[1]: Reloading requested from client PID 2321 ('systemctl') (unit session-7.scope)... Dec 13 01:17:03.185112 systemd[1]: Reloading... Dec 13 01:17:03.339982 zram_generator::config[2368]: No configuration found. Dec 13 01:17:03.501364 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:17:03.595888 systemd[1]: Reloading finished in 410 ms. Dec 13 01:17:03.645631 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:17:03.645783 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:17:03.646228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:03.658170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:03.873148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:03.880623 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:17:03.948169 kubelet[2422]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:03.948603 kubelet[2422]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
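That etcd pull completes the bootstrap image set. Summing the "size" fields of the seven "Pulled image" lines gives the total registry traffic for this boot:

    sizes = {                       # bytes, from the "Pulled image" lines above
        "kube-apiserver:v1.29.12": 35136054,
        "kube-controller-manager:v1.29.12": 33662844,
        "kube-scheduler:v1.29.12": 18777952,
        "kube-proxy:v1.29.12": 28618977,
        "coredns:v1.11.1": 18182961,
        "pause:3.9": 321520,
        "etcd:3.5.10-0": 56649232,
    }
    total = sum(sizes.values())
    print(f"{total} bytes ~ {total / 2**20:.0f} MiB")  # 191349540 bytes ~ 182 MiB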
Dec 13 01:17:03.948658 kubelet[2422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:03.950854 kubelet[2422]: I1213 01:17:03.950780 2422 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:17:04.609933 kubelet[2422]: I1213 01:17:04.608662 2422 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:17:04.609933 kubelet[2422]: I1213 01:17:04.608702 2422 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:17:04.609933 kubelet[2422]: I1213 01:17:04.609252 2422 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:17:04.637796 kubelet[2422]: E1213 01:17:04.637759 2422 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:04.638790 kubelet[2422]: I1213 01:17:04.638745 2422 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:17:04.652637 kubelet[2422]: I1213 01:17:04.652595 2422 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:17:04.655404 kubelet[2422]: I1213 01:17:04.655357 2422 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:17:04.655663 kubelet[2422]: I1213 01:17:04.655622 2422 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:17:04.655663 kubelet[2422]: I1213 01:17:04.655659 2422 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:17:04.655949 kubelet[2422]: I1213 01:17:04.655677 2422 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 
01:17:04.655949 kubelet[2422]: I1213 01:17:04.655818 2422 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:04.656057 kubelet[2422]: I1213 01:17:04.655976 2422 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:17:04.656057 kubelet[2422]: I1213 01:17:04.656001 2422 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:17:04.656057 kubelet[2422]: I1213 01:17:04.656039 2422 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:17:04.656190 kubelet[2422]: I1213 01:17:04.656062 2422 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:17:04.658410 kubelet[2422]: W1213 01:17:04.658069 2422 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:04.658410 kubelet[2422]: E1213 01:17:04.658138 2422 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:04.658410 kubelet[2422]: W1213 01:17:04.658335 2422 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:04.658614 kubelet[2422]: E1213 01:17:04.658425 2422 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:04.658614 kubelet[2422]: I1213 01:17:04.658564 2422 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:17:04.663202 kubelet[2422]: I1213 01:17:04.663153 2422 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:17:04.663305 kubelet[2422]: W1213 01:17:04.663240 2422 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
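Every "connect: connection refused" against 10.128.0.87:6443 above is one symptom: nothing is listening yet, because the kube-apiserver static pod that the kubelet itself must launch has not started. A one-function reachability probe of that endpoint:

    import socket

    def apiserver_up(host: str = "10.128.0.87", port: int = 6443) -> bool:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:  # ConnectionRefusedError, timeout, unreachable...
            return False

    print(apiserver_up())  # stays False until the apiserver binds :6443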
Dec 13 01:17:04.664536 kubelet[2422]: I1213 01:17:04.663993 2422 server.go:1256] "Started kubelet" Dec 13 01:17:04.666255 kubelet[2422]: I1213 01:17:04.665352 2422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:17:04.673611 kubelet[2422]: E1213 01:17:04.673578 2422 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.87:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal.181097acbaeddccd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal,UID:ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 01:17:04.663960781 +0000 UTC m=+0.775765451,LastTimestamp:2024-12-13 01:17:04.663960781 +0000 UTC m=+0.775765451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal,}" Dec 13 01:17:04.674045 kubelet[2422]: I1213 01:17:04.674019 2422 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:17:04.675815 kubelet[2422]: I1213 01:17:04.674990 2422 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:17:04.676545 kubelet[2422]: I1213 01:17:04.676520 2422 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:17:04.676821 kubelet[2422]: I1213 01:17:04.676796 2422 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:17:04.680827 kubelet[2422]: I1213 01:17:04.680147 2422 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:17:04.680827 kubelet[2422]: I1213 01:17:04.680706 2422 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:17:04.684570 kubelet[2422]: I1213 01:17:04.684546 2422 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:17:04.686375 kubelet[2422]: W1213 01:17:04.685182 2422 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:04.686375 kubelet[2422]: E1213 01:17:04.685258 2422 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:04.686375 kubelet[2422]: E1213 01:17:04.685368 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.87:6443: connect: connection refused" interval="200ms" Dec 13 01:17:04.688114 kubelet[2422]: I1213 01:17:04.688092 2422 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:17:04.688365 kubelet[2422]: I1213 
01:17:04.688337 2422 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:17:04.689186 kubelet[2422]: E1213 01:17:04.689166 2422 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:17:04.691698 kubelet[2422]: I1213 01:17:04.691680 2422 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:17:04.705788 kubelet[2422]: I1213 01:17:04.705757 2422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:17:04.707498 kubelet[2422]: I1213 01:17:04.707453 2422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:17:04.707498 kubelet[2422]: I1213 01:17:04.707491 2422 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:17:04.707666 kubelet[2422]: I1213 01:17:04.707519 2422 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:17:04.707666 kubelet[2422]: E1213 01:17:04.707589 2422 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:17:04.728928 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 01:17:04.731489 kubelet[2422]: W1213 01:17:04.731423 2422 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:04.731600 kubelet[2422]: E1213 01:17:04.731505 2422 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:04.755678 kubelet[2422]: I1213 01:17:04.755569 2422 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:17:04.755678 kubelet[2422]: I1213 01:17:04.755597 2422 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:17:04.755678 kubelet[2422]: I1213 01:17:04.755620 2422 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:04.757973 kubelet[2422]: I1213 01:17:04.757934 2422 policy_none.go:49] "None policy: Start" Dec 13 01:17:04.758717 kubelet[2422]: I1213 01:17:04.758680 2422 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:17:04.758717 kubelet[2422]: I1213 01:17:04.758713 2422 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:17:04.764666 kubelet[2422]: I1213 01:17:04.764615 2422 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:17:04.764972 kubelet[2422]: I1213 01:17:04.764935 2422 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:17:04.770966 kubelet[2422]: E1213 01:17:04.770886 2422 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" not found" Dec 13 01:17:04.786496 kubelet[2422]: I1213 01:17:04.786472 2422 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 
01:17:04.786916 kubelet[2422]: E1213 01:17:04.786868 2422 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.87:6443/api/v1/nodes\": dial tcp 10.128.0.87:6443: connect: connection refused" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.808302 kubelet[2422]: I1213 01:17:04.808252 2422 topology_manager.go:215] "Topology Admit Handler" podUID="a8dd112c938402613cf7f599b013e8bd" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.817312 kubelet[2422]: I1213 01:17:04.817284 2422 topology_manager.go:215] "Topology Admit Handler" podUID="618345751ace652cfdf7f162baa4b542" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.825773 kubelet[2422]: I1213 01:17:04.825428 2422 topology_manager.go:215] "Topology Admit Handler" podUID="be158464b689cedcd6d5db695afaadf8" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.886570 kubelet[2422]: I1213 01:17:04.886101 2422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/618345751ace652cfdf7f162baa4b542-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"618345751ace652cfdf7f162baa4b542\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.886570 kubelet[2422]: I1213 01:17:04.886164 2422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/618345751ace652cfdf7f162baa4b542-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"618345751ace652cfdf7f162baa4b542\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.886570 kubelet[2422]: I1213 01:17:04.886202 2422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/618345751ace652cfdf7f162baa4b542-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"618345751ace652cfdf7f162baa4b542\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.886570 kubelet[2422]: I1213 01:17:04.886235 2422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be158464b689cedcd6d5db695afaadf8-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"be158464b689cedcd6d5db695afaadf8\") " pod="kube-system/kube-scheduler-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.886884 kubelet[2422]: I1213 01:17:04.886280 2422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8dd112c938402613cf7f599b013e8bd-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"a8dd112c938402613cf7f599b013e8bd\") " 
pod="kube-system/kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.886884 kubelet[2422]: I1213 01:17:04.886314 2422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8dd112c938402613cf7f599b013e8bd-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"a8dd112c938402613cf7f599b013e8bd\") " pod="kube-system/kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.886884 kubelet[2422]: I1213 01:17:04.886353 2422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8dd112c938402613cf7f599b013e8bd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"a8dd112c938402613cf7f599b013e8bd\") " pod="kube-system/kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.886884 kubelet[2422]: E1213 01:17:04.886363 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.87:6443: connect: connection refused" interval="400ms" Dec 13 01:17:04.887119 kubelet[2422]: I1213 01:17:04.886393 2422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/618345751ace652cfdf7f162baa4b542-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"618345751ace652cfdf7f162baa4b542\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.887119 kubelet[2422]: I1213 01:17:04.886451 2422 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/618345751ace652cfdf7f162baa4b542-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"618345751ace652cfdf7f162baa4b542\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.992007 kubelet[2422]: I1213 01:17:04.991962 2422 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:04.992580 kubelet[2422]: E1213 01:17:04.992353 2422 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.87:6443/api/v1/nodes\": dial tcp 10.128.0.87:6443: connect: connection refused" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:05.129342 containerd[1595]: time="2024-12-13T01:17:05.129287533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal,Uid:a8dd112c938402613cf7f599b013e8bd,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:05.138095 containerd[1595]: time="2024-12-13T01:17:05.137573582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal,Uid:618345751ace652cfdf7f162baa4b542,Namespace:kube-system,Attempt:0,}" Dec 13 
01:17:05.140913 containerd[1595]: time="2024-12-13T01:17:05.139533812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal,Uid:be158464b689cedcd6d5db695afaadf8,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:05.287182 kubelet[2422]: E1213 01:17:05.287141 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.87:6443: connect: connection refused" interval="800ms" Dec 13 01:17:05.398840 kubelet[2422]: I1213 01:17:05.398714 2422 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:05.399225 kubelet[2422]: E1213 01:17:05.399170 2422 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.87:6443/api/v1/nodes\": dial tcp 10.128.0.87:6443: connect: connection refused" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:05.492245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207254802.mount: Deactivated successfully. Dec 13 01:17:05.499708 containerd[1595]: time="2024-12-13T01:17:05.499650000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:05.501006 containerd[1595]: time="2024-12-13T01:17:05.500944884Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Dec 13 01:17:05.502538 containerd[1595]: time="2024-12-13T01:17:05.502477004Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:05.504886 containerd[1595]: time="2024-12-13T01:17:05.504827637Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:17:05.505008 containerd[1595]: time="2024-12-13T01:17:05.504978484Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:05.506074 containerd[1595]: time="2024-12-13T01:17:05.506022089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:05.508815 containerd[1595]: time="2024-12-13T01:17:05.508339450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:17:05.508815 containerd[1595]: time="2024-12-13T01:17:05.508441845Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:05.508815 containerd[1595]: time="2024-12-13T01:17:05.508502217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 379.108954ms" Dec 13 01:17:05.513930 containerd[1595]: time="2024-12-13T01:17:05.513862658Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 374.257256ms" Dec 13 01:17:05.517035 containerd[1595]: time="2024-12-13T01:17:05.516980496Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 379.222783ms" Dec 13 01:17:05.576659 kubelet[2422]: W1213 01:17:05.576606 2422 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:05.576952 kubelet[2422]: E1213 01:17:05.576913 2422 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:05.716859 containerd[1595]: time="2024-12-13T01:17:05.716292448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:05.716859 containerd[1595]: time="2024-12-13T01:17:05.716382603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:05.716859 containerd[1595]: time="2024-12-13T01:17:05.716411440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:05.716859 containerd[1595]: time="2024-12-13T01:17:05.716543555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:05.720472 containerd[1595]: time="2024-12-13T01:17:05.720195253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:05.720472 containerd[1595]: time="2024-12-13T01:17:05.720325869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:05.720472 containerd[1595]: time="2024-12-13T01:17:05.720365457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:05.723030 containerd[1595]: time="2024-12-13T01:17:05.722150743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:05.723030 containerd[1595]: time="2024-12-13T01:17:05.722248040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:05.723030 containerd[1595]: time="2024-12-13T01:17:05.722277241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:05.723030 containerd[1595]: time="2024-12-13T01:17:05.722488677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:05.723030 containerd[1595]: time="2024-12-13T01:17:05.722854277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:05.854196 containerd[1595]: time="2024-12-13T01:17:05.853994972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal,Uid:618345751ace652cfdf7f162baa4b542,Namespace:kube-system,Attempt:0,} returns sandbox id \"346a42595122abbb01516e1fa8cf2650bb59828d9e3023d9009b900b6de65468\"" Dec 13 01:17:05.860361 kubelet[2422]: E1213 01:17:05.860329 2422 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flat" Dec 13 01:17:05.866175 containerd[1595]: time="2024-12-13T01:17:05.866121825Z" level=info msg="CreateContainer within sandbox \"346a42595122abbb01516e1fa8cf2650bb59828d9e3023d9009b900b6de65468\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:17:05.872878 containerd[1595]: time="2024-12-13T01:17:05.872755229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal,Uid:be158464b689cedcd6d5db695afaadf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2065f95fb6269db796efb1849a5bd4a035b577d3c34da7a23094d856493f84a2\"" Dec 13 01:17:05.875347 kubelet[2422]: E1213 01:17:05.875316 2422 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-21291" Dec 13 01:17:05.878728 containerd[1595]: time="2024-12-13T01:17:05.878610370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal,Uid:a8dd112c938402613cf7f599b013e8bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6f51318a2ec474255e7f386e537385344e9ae4f0687bbf3cab7da2c3907f3e6\"" Dec 13 01:17:05.879541 containerd[1595]: time="2024-12-13T01:17:05.879425828Z" level=info msg="CreateContainer within sandbox \"2065f95fb6269db796efb1849a5bd4a035b577d3c34da7a23094d856493f84a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:17:05.880603 kubelet[2422]: E1213 01:17:05.880315 2422 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-21291" Dec 13 01:17:05.882369 containerd[1595]: time="2024-12-13T01:17:05.882333983Z" level=info msg="CreateContainer within sandbox \"c6f51318a2ec474255e7f386e537385344e9ae4f0687bbf3cab7da2c3907f3e6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" 
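The "Hostname for pod was too long" warnings above are the 63-character DNS label limit at work: the kubelet caps the generated pod hostname at 63 characters and trims any '-' or '.' left dangling at the cut. A sketch that reproduces the logged truncation:

    def truncate_hostname(name: str, max_len: int = 63) -> str:
        if len(name) <= max_len:
            return name
        # Trim separators stranded at the cut so the result stays a valid
        # DNS label (assumed to mirror the kubelet's behaviour).
        return name[:max_len].rstrip("-.")

    node = "ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal"
    print(truncate_hostname(f"kube-controller-manager-{node}"))
    # -> kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flat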
Dec 13 01:17:05.895236 containerd[1595]: time="2024-12-13T01:17:05.895180636Z" level=info msg="CreateContainer within sandbox \"346a42595122abbb01516e1fa8cf2650bb59828d9e3023d9009b900b6de65468\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a51d8721bd66897df330ce792f7d60d4181020bca2814aaf39bfa93d15157457\"" Dec 13 01:17:05.896817 containerd[1595]: time="2024-12-13T01:17:05.896670561Z" level=info msg="StartContainer for \"a51d8721bd66897df330ce792f7d60d4181020bca2814aaf39bfa93d15157457\"" Dec 13 01:17:05.903457 containerd[1595]: time="2024-12-13T01:17:05.903402366Z" level=info msg="CreateContainer within sandbox \"c6f51318a2ec474255e7f386e537385344e9ae4f0687bbf3cab7da2c3907f3e6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0a1f9c0911c752ae51a9beed4c4fa519975a341318fcf5bf5aca9b3a0bbb406e\"" Dec 13 01:17:05.904547 containerd[1595]: time="2024-12-13T01:17:05.904512953Z" level=info msg="StartContainer for \"0a1f9c0911c752ae51a9beed4c4fa519975a341318fcf5bf5aca9b3a0bbb406e\"" Dec 13 01:17:05.913663 containerd[1595]: time="2024-12-13T01:17:05.913619739Z" level=info msg="CreateContainer within sandbox \"2065f95fb6269db796efb1849a5bd4a035b577d3c34da7a23094d856493f84a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d712bef90d1be65e7168764a35ca9ece68f4015001f73fc093960cecd0bc0a58\"" Dec 13 01:17:05.914557 containerd[1595]: time="2024-12-13T01:17:05.914515909Z" level=info msg="StartContainer for \"d712bef90d1be65e7168764a35ca9ece68f4015001f73fc093960cecd0bc0a58\"" Dec 13 01:17:06.053773 kubelet[2422]: W1213 01:17:06.052458 2422 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:06.053773 kubelet[2422]: E1213 01:17:06.052557 2422 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:06.088172 kubelet[2422]: E1213 01:17:06.088141 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.87:6443: connect: connection refused" interval="1.6s" Dec 13 01:17:06.093991 containerd[1595]: time="2024-12-13T01:17:06.092557845Z" level=info msg="StartContainer for \"d712bef90d1be65e7168764a35ca9ece68f4015001f73fc093960cecd0bc0a58\" returns successfully" Dec 13 01:17:06.093991 containerd[1595]: time="2024-12-13T01:17:06.092686245Z" level=info msg="StartContainer for \"a51d8721bd66897df330ce792f7d60d4181020bca2814aaf39bfa93d15157457\" returns successfully" Dec 13 01:17:06.114798 containerd[1595]: time="2024-12-13T01:17:06.114359369Z" level=info msg="StartContainer for \"0a1f9c0911c752ae51a9beed4c4fa519975a341318fcf5bf5aca9b3a0bbb406e\" returns successfully" Dec 13 01:17:06.209554 kubelet[2422]: I1213 01:17:06.208209 2422 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 
01:17:06.209554 kubelet[2422]: E1213 01:17:06.208593 2422 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.87:6443/api/v1/nodes\": dial tcp 10.128.0.87:6443: connect: connection refused" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:06.238935 kubelet[2422]: W1213 01:17:06.238527 2422 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:06.238935 kubelet[2422]: E1213 01:17:06.238618 2422 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.87:6443: connect: connection refused Dec 13 01:17:07.818145 kubelet[2422]: I1213 01:17:07.818098 2422 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:08.948227 kubelet[2422]: E1213 01:17:08.948107 2422 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" not found" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:09.006947 kubelet[2422]: I1213 01:17:09.006560 2422 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:09.660428 kubelet[2422]: I1213 01:17:09.660123 2422 apiserver.go:52] "Watching apiserver" Dec 13 01:17:09.684591 kubelet[2422]: I1213 01:17:09.684552 2422 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:17:11.545961 systemd[1]: Reloading requested from client PID 2701 ('systemctl') (unit session-7.scope)... Dec 13 01:17:11.545984 systemd[1]: Reloading... Dec 13 01:17:11.669943 zram_generator::config[2737]: No configuration found. Dec 13 01:17:11.828489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:17:11.937776 systemd[1]: Reloading finished in 391 ms. Dec 13 01:17:11.979556 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:12.000096 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:17:12.000713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:12.008710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:12.282137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:12.293547 (kubelet)[2799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:17:12.379882 kubelet[2799]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:12.379882 kubelet[2799]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
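The earlier "Failed to ensure lease exists, will retry" lines double their interval each attempt (200ms, 400ms, 800ms, 1.6s) until the apiserver finally answers. A sketch of that doubling backoff; the 7s cap is an assumption for illustration, not taken from this log:

    def lease_backoff(base: float = 0.2, factor: float = 2.0, cap: float = 7.0):
        delay = base
        while True:           # caller stops iterating once the call succeeds
            yield min(delay, cap)
            delay *= factor

    for attempt, delay in zip(range(5), lease_backoff()):
        print(f"attempt {attempt}: retry in {delay:g}s")
    # attempt 0: retry in 0.2s ... attempt 4: retry in 3.2s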
Dec 13 01:17:12.379882 kubelet[2799]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:12.380465 kubelet[2799]: I1213 01:17:12.379987 2799 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:17:12.387357 kubelet[2799]: I1213 01:17:12.387213 2799 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:17:12.387357 kubelet[2799]: I1213 01:17:12.387250 2799 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:17:12.387551 kubelet[2799]: I1213 01:17:12.387491 2799 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:17:12.390199 kubelet[2799]: I1213 01:17:12.390174 2799 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:17:12.393668 kubelet[2799]: I1213 01:17:12.393260 2799 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:17:12.404147 kubelet[2799]: I1213 01:17:12.404122 2799 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:17:12.404811 kubelet[2799]: I1213 01:17:12.404781 2799 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:17:12.405062 kubelet[2799]: I1213 01:17:12.405029 2799 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:17:12.405062 kubelet[2799]: I1213 01:17:12.405062 2799 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:17:12.405271 kubelet[2799]: I1213 01:17:12.405081 2799 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:17:12.405271 kubelet[2799]: I1213 01:17:12.405126 2799 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:12.405271 kubelet[2799]: I1213 01:17:12.405251 2799 kubelet.go:396] "Attempting to sync node 
with API server" Dec 13 01:17:12.405271 kubelet[2799]: I1213 01:17:12.405271 2799 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:17:12.405507 kubelet[2799]: I1213 01:17:12.405306 2799 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:17:12.405507 kubelet[2799]: I1213 01:17:12.405329 2799 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:17:12.408921 kubelet[2799]: I1213 01:17:12.408828 2799 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:17:12.410092 kubelet[2799]: I1213 01:17:12.410069 2799 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:17:12.412926 kubelet[2799]: I1213 01:17:12.411881 2799 server.go:1256] "Started kubelet" Dec 13 01:17:12.421923 kubelet[2799]: I1213 01:17:12.417612 2799 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:17:12.429922 kubelet[2799]: I1213 01:17:12.429883 2799 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:17:12.433996 kubelet[2799]: I1213 01:17:12.433977 2799 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:17:12.434812 kubelet[2799]: I1213 01:17:12.434792 2799 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:17:12.435141 kubelet[2799]: I1213 01:17:12.435127 2799 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:17:12.443140 kubelet[2799]: I1213 01:17:12.442058 2799 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:17:12.445924 kubelet[2799]: I1213 01:17:12.444423 2799 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:17:12.445924 kubelet[2799]: I1213 01:17:12.444652 2799 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:17:12.449926 kubelet[2799]: I1213 01:17:12.449066 2799 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:17:12.449926 kubelet[2799]: I1213 01:17:12.449187 2799 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:17:12.459914 kubelet[2799]: E1213 01:17:12.454672 2799 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:17:12.459914 kubelet[2799]: I1213 01:17:12.454960 2799 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:17:12.488615 kubelet[2799]: I1213 01:17:12.488594 2799 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:17:12.501284 kubelet[2799]: I1213 01:17:12.499701 2799 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:17:12.501284 kubelet[2799]: I1213 01:17:12.499737 2799 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:17:12.501284 kubelet[2799]: I1213 01:17:12.499758 2799 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:17:12.501284 kubelet[2799]: E1213 01:17:12.499824 2799 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:17:12.543701 kubelet[2799]: I1213 01:17:12.543571 2799 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.560008 kubelet[2799]: I1213 01:17:12.559488 2799 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.560008 kubelet[2799]: I1213 01:17:12.559598 2799 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.600380 kubelet[2799]: E1213 01:17:12.600045 2799 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:17:12.604457 kubelet[2799]: I1213 01:17:12.604409 2799 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:17:12.604457 kubelet[2799]: I1213 01:17:12.604437 2799 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:17:12.604457 kubelet[2799]: I1213 01:17:12.604459 2799 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:12.604784 kubelet[2799]: I1213 01:17:12.604682 2799 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:17:12.604784 kubelet[2799]: I1213 01:17:12.604712 2799 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:17:12.604784 kubelet[2799]: I1213 01:17:12.604724 2799 policy_none.go:49] "None policy: Start" Dec 13 01:17:12.606145 kubelet[2799]: I1213 01:17:12.606053 2799 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:17:12.606145 kubelet[2799]: I1213 01:17:12.606091 2799 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:17:12.606411 kubelet[2799]: I1213 01:17:12.606373 2799 state_mem.go:75] "Updated machine memory state" Dec 13 01:17:12.608219 kubelet[2799]: I1213 01:17:12.608199 2799 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:17:12.611647 kubelet[2799]: I1213 01:17:12.609587 2799 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:17:12.801197 kubelet[2799]: I1213 01:17:12.801054 2799 topology_manager.go:215] "Topology Admit Handler" podUID="a8dd112c938402613cf7f599b013e8bd" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.801197 kubelet[2799]: I1213 01:17:12.801188 2799 topology_manager.go:215] "Topology Admit Handler" podUID="618345751ace652cfdf7f162baa4b542" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.801403 kubelet[2799]: I1213 01:17:12.801249 2799 topology_manager.go:215] "Topology Admit Handler" podUID="be158464b689cedcd6d5db695afaadf8" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.820206 kubelet[2799]: W1213 01:17:12.817151 2799 warnings.go:70] metadata.name: this is used in 
the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:17:12.820960 kubelet[2799]: W1213 01:17:12.820505 2799 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:17:12.820960 kubelet[2799]: W1213 01:17:12.820532 2799 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:17:12.838147 kubelet[2799]: I1213 01:17:12.836871 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/618345751ace652cfdf7f162baa4b542-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"618345751ace652cfdf7f162baa4b542\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.838147 kubelet[2799]: I1213 01:17:12.836948 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be158464b689cedcd6d5db695afaadf8-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"be158464b689cedcd6d5db695afaadf8\") " pod="kube-system/kube-scheduler-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.838147 kubelet[2799]: I1213 01:17:12.836987 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8dd112c938402613cf7f599b013e8bd-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"a8dd112c938402613cf7f599b013e8bd\") " pod="kube-system/kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.838147 kubelet[2799]: I1213 01:17:12.837031 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8dd112c938402613cf7f599b013e8bd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"a8dd112c938402613cf7f599b013e8bd\") " pod="kube-system/kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.838367 kubelet[2799]: I1213 01:17:12.837069 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/618345751ace652cfdf7f162baa4b542-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"618345751ace652cfdf7f162baa4b542\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.838367 kubelet[2799]: I1213 01:17:12.837110 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/618345751ace652cfdf7f162baa4b542-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: 
\"618345751ace652cfdf7f162baa4b542\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.838367 kubelet[2799]: I1213 01:17:12.837166 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8dd112c938402613cf7f599b013e8bd-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"a8dd112c938402613cf7f599b013e8bd\") " pod="kube-system/kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.838367 kubelet[2799]: I1213 01:17:12.837207 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/618345751ace652cfdf7f162baa4b542-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"618345751ace652cfdf7f162baa4b542\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:12.838495 kubelet[2799]: I1213 01:17:12.837244 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/618345751ace652cfdf7f162baa4b542-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" (UID: \"618345751ace652cfdf7f162baa4b542\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:13.408138 kubelet[2799]: I1213 01:17:13.408078 2799 apiserver.go:52] "Watching apiserver" Dec 13 01:17:13.436103 kubelet[2799]: I1213 01:17:13.436034 2799 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:17:13.525419 kubelet[2799]: I1213 01:17:13.525365 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" podStartSLOduration=1.525281551 podStartE2EDuration="1.525281551s" podCreationTimestamp="2024-12-13 01:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:13.524272683 +0000 UTC m=+1.221689794" watchObservedRunningTime="2024-12-13 01:17:13.525281551 +0000 UTC m=+1.222698648" Dec 13 01:17:13.525649 kubelet[2799]: I1213 01:17:13.525531 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" podStartSLOduration=1.525501018 podStartE2EDuration="1.525501018s" podCreationTimestamp="2024-12-13 01:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:13.512856524 +0000 UTC m=+1.210273634" watchObservedRunningTime="2024-12-13 01:17:13.525501018 +0000 UTC m=+1.222918120" Dec 13 01:17:13.541880 kubelet[2799]: I1213 01:17:13.541827 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" podStartSLOduration=1.541774041 podStartE2EDuration="1.541774041s" podCreationTimestamp="2024-12-13 01:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-12-13 01:17:13.539743319 +0000 UTC m=+1.237160430" watchObservedRunningTime="2024-12-13 01:17:13.541774041 +0000 UTC m=+1.239191208" Dec 13 01:17:13.559936 kubelet[2799]: W1213 01:17:13.559881 2799 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:17:13.560128 kubelet[2799]: E1213 01:17:13.560015 2799 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:17.951766 sudo[1862]: pam_unix(sudo:session): session closed for user root Dec 13 01:17:17.994685 sshd[1858]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:17.999669 systemd[1]: sshd@6-10.128.0.87:22-147.75.109.163:44596.service: Deactivated successfully. Dec 13 01:17:18.005684 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:17:18.007776 systemd-logind[1571]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:17:18.009224 systemd-logind[1571]: Removed session 7. Dec 13 01:17:19.071872 update_engine[1577]: I20241213 01:17:19.071764 1577 update_attempter.cc:509] Updating boot flags... Dec 13 01:17:19.145949 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2885) Dec 13 01:17:19.270217 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2881) Dec 13 01:17:19.369041 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2881) Dec 13 01:17:25.924731 kubelet[2799]: I1213 01:17:25.924691 2799 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:17:25.926066 containerd[1595]: time="2024-12-13T01:17:25.926016860Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
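Note: in the pod_startup_latency_tracker lines above, podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and because these static pods report zero-value pull timestamps, podStartSLOduration equals the E2E duration. A small Go check against the kube-controller-manager numbers:

    // Sketch: reproduce podStartE2EDuration from the logged timestamps
    // (created 01:17:12, observed running 01:17:13.541774041).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2024-12-13 01:17:12 +0000 UTC")
        running, _ := time.Parse(layout, "2024-12-13 01:17:13.541774041 +0000 UTC")
        fmt.Println(running.Sub(created)) // 1.541774041s, matching podStartE2EDuration
    }

For a pod that actually pulls its image (the tigera-operator pod further down), the SLO duration excludes the pull window: 9.311195726s E2E minus the pull from 01:17:27.507823558 to 01:17:31.924316762 (about 4.416s) leaves the reported 4.894702523s.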
Dec 13 01:17:25.926918 kubelet[2799]: I1213 01:17:25.926345 2799 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:17:26.649926 kubelet[2799]: I1213 01:17:26.647985 2799 topology_manager.go:215] "Topology Admit Handler" podUID="249ddee7-1457-4599-b4bb-43391f82c252" podNamespace="kube-system" podName="kube-proxy-hhdbl" Dec 13 01:17:26.729194 kubelet[2799]: I1213 01:17:26.729109 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/249ddee7-1457-4599-b4bb-43391f82c252-xtables-lock\") pod \"kube-proxy-hhdbl\" (UID: \"249ddee7-1457-4599-b4bb-43391f82c252\") " pod="kube-system/kube-proxy-hhdbl" Dec 13 01:17:26.729654 kubelet[2799]: I1213 01:17:26.729633 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/249ddee7-1457-4599-b4bb-43391f82c252-kube-proxy\") pod \"kube-proxy-hhdbl\" (UID: \"249ddee7-1457-4599-b4bb-43391f82c252\") " pod="kube-system/kube-proxy-hhdbl" Dec 13 01:17:26.729823 kubelet[2799]: I1213 01:17:26.729806 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7v6f\" (UniqueName: \"kubernetes.io/projected/249ddee7-1457-4599-b4bb-43391f82c252-kube-api-access-g7v6f\") pod \"kube-proxy-hhdbl\" (UID: \"249ddee7-1457-4599-b4bb-43391f82c252\") " pod="kube-system/kube-proxy-hhdbl" Dec 13 01:17:26.730044 kubelet[2799]: I1213 01:17:26.730027 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/249ddee7-1457-4599-b4bb-43391f82c252-lib-modules\") pod \"kube-proxy-hhdbl\" (UID: \"249ddee7-1457-4599-b4bb-43391f82c252\") " pod="kube-system/kube-proxy-hhdbl" Dec 13 01:17:26.840878 kubelet[2799]: E1213 01:17:26.840809 2799 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:17:26.840878 kubelet[2799]: E1213 01:17:26.840866 2799 projected.go:200] Error preparing data for projected volume kube-api-access-g7v6f for pod kube-system/kube-proxy-hhdbl: configmap "kube-root-ca.crt" not found Dec 13 01:17:26.841146 kubelet[2799]: E1213 01:17:26.840979 2799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/249ddee7-1457-4599-b4bb-43391f82c252-kube-api-access-g7v6f podName:249ddee7-1457-4599-b4bb-43391f82c252 nodeName:}" failed. No retries permitted until 2024-12-13 01:17:27.340949885 +0000 UTC m=+15.038366987 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-g7v6f" (UniqueName: "kubernetes.io/projected/249ddee7-1457-4599-b4bb-43391f82c252-kube-api-access-g7v6f") pod "kube-proxy-hhdbl" (UID: "249ddee7-1457-4599-b4bb-43391f82c252") : configmap "kube-root-ca.crt" not found Dec 13 01:17:27.060174 kubelet[2799]: I1213 01:17:27.060108 2799 topology_manager.go:215] "Topology Admit Handler" podUID="1f849596-bd37-492d-9522-addfc0c8404f" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-8ld72" Dec 13 01:17:27.133336 kubelet[2799]: I1213 01:17:27.133235 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f849596-bd37-492d-9522-addfc0c8404f-var-lib-calico\") pod \"tigera-operator-c7ccbd65-8ld72\" (UID: \"1f849596-bd37-492d-9522-addfc0c8404f\") " pod="tigera-operator/tigera-operator-c7ccbd65-8ld72" Dec 13 01:17:27.133336 kubelet[2799]: I1213 01:17:27.133299 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq829\" (UniqueName: \"kubernetes.io/projected/1f849596-bd37-492d-9522-addfc0c8404f-kube-api-access-rq829\") pod \"tigera-operator-c7ccbd65-8ld72\" (UID: \"1f849596-bd37-492d-9522-addfc0c8404f\") " pod="tigera-operator/tigera-operator-c7ccbd65-8ld72" Dec 13 01:17:27.372771 containerd[1595]: time="2024-12-13T01:17:27.372250181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-8ld72,Uid:1f849596-bd37-492d-9522-addfc0c8404f,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:17:27.416751 containerd[1595]: time="2024-12-13T01:17:27.416588470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:27.416751 containerd[1595]: time="2024-12-13T01:17:27.416658267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:27.416751 containerd[1595]: time="2024-12-13T01:17:27.416683766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:27.417234 containerd[1595]: time="2024-12-13T01:17:27.416971757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:27.505599 containerd[1595]: time="2024-12-13T01:17:27.505534162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-8ld72,Uid:1f849596-bd37-492d-9522-addfc0c8404f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6930377c003c1079cc0f05f7b47cd96ec74b6ea539a5862cad11edebe1a321c8\"" Dec 13 01:17:27.508822 containerd[1595]: time="2024-12-13T01:17:27.508678691Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:17:27.555123 containerd[1595]: time="2024-12-13T01:17:27.555060591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hhdbl,Uid:249ddee7-1457-4599-b4bb-43391f82c252,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:27.589946 containerd[1595]: time="2024-12-13T01:17:27.589631594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:27.590552 containerd[1595]: time="2024-12-13T01:17:27.590475407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:27.590726 containerd[1595]: time="2024-12-13T01:17:27.590545896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:27.590794 containerd[1595]: time="2024-12-13T01:17:27.590681733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:27.639172 containerd[1595]: time="2024-12-13T01:17:27.638962890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hhdbl,Uid:249ddee7-1457-4599-b4bb-43391f82c252,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ae3694a86e14a7028cf92801dd553498f8dcfe4bd434f8f3f1424c33cbac591\"" Dec 13 01:17:27.642780 containerd[1595]: time="2024-12-13T01:17:27.642714628Z" level=info msg="CreateContainer within sandbox \"6ae3694a86e14a7028cf92801dd553498f8dcfe4bd434f8f3f1424c33cbac591\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:17:27.660960 containerd[1595]: time="2024-12-13T01:17:27.660866025Z" level=info msg="CreateContainer within sandbox \"6ae3694a86e14a7028cf92801dd553498f8dcfe4bd434f8f3f1424c33cbac591\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"898d743dbd965bb8c7f12e3fa794866b48b28ccb6996e21b991e6d2a3903d4f1\"" Dec 13 01:17:27.661579 containerd[1595]: time="2024-12-13T01:17:27.661532816Z" level=info msg="StartContainer for \"898d743dbd965bb8c7f12e3fa794866b48b28ccb6996e21b991e6d2a3903d4f1\"" Dec 13 01:17:27.737454 containerd[1595]: time="2024-12-13T01:17:27.737400999Z" level=info msg="StartContainer for \"898d743dbd965bb8c7f12e3fa794866b48b28ccb6996e21b991e6d2a3903d4f1\" returns successfully" Dec 13 01:17:31.222166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2852713195.mount: Deactivated successfully. 
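Note: the RunPodSandbox → CreateContainer → StartContainer lines above are containerd servicing the kubelet's CRI calls over its unix socket. A rough sketch of that sequence against the CRI runtime service follows; the socket path, metadata values, and image tag are illustrative assumptions, and error handling is abbreviated.

    // Sketch: the CRI calls behind the RunPodSandbox / CreateContainer /
    // StartContainer log lines above.
    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Sandbox metadata is illustrative, loosely modeled on the
        // kube-proxy pod from the log.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name: "kube-proxy-example", Namespace: "kube-system",
                Uid: "example-uid", Attempt: 0,
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }
        cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.29.2"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }
        _, _ = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId})
    }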
Dec 13 01:17:31.916820 containerd[1595]: time="2024-12-13T01:17:31.916753066Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:31.918129 containerd[1595]: time="2024-12-13T01:17:31.918053614Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764293" Dec 13 01:17:31.919729 containerd[1595]: time="2024-12-13T01:17:31.919645998Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:31.922686 containerd[1595]: time="2024-12-13T01:17:31.922620415Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:31.923837 containerd[1595]: time="2024-12-13T01:17:31.923658072Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.414928425s" Dec 13 01:17:31.923837 containerd[1595]: time="2024-12-13T01:17:31.923706213Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:17:31.926832 containerd[1595]: time="2024-12-13T01:17:31.926792493Z" level=info msg="CreateContainer within sandbox \"6930377c003c1079cc0f05f7b47cd96ec74b6ea539a5862cad11edebe1a321c8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:17:31.944200 containerd[1595]: time="2024-12-13T01:17:31.944084365Z" level=info msg="CreateContainer within sandbox \"6930377c003c1079cc0f05f7b47cd96ec74b6ea539a5862cad11edebe1a321c8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e61b9ca296ba58cfe9b6e71c0e8d663c88ae8435729025138bc0b94ee878a0be\"" Dec 13 01:17:31.947889 containerd[1595]: time="2024-12-13T01:17:31.944926511Z" level=info msg="StartContainer for \"e61b9ca296ba58cfe9b6e71c0e8d663c88ae8435729025138bc0b94ee878a0be\"" Dec 13 01:17:31.946768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114157952.mount: Deactivated successfully. 
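Note: the PullImage / ImageCreate / "Pulled image" lines above are the CRI image service side of the same API; the repo digest names the registry manifest, while the image id (sha256:3045aa4a…) is the digest of the image config blob. A sketch of the equivalent pull call, reusing the image reference from the log (socket path assumed as in the previous sketch):

    // Sketch: the CRI image-service call behind the PullImage lines
    // above. The returned ImageRef is typically the image id.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        img := runtimeapi.NewImageServiceClient(conn)

        start := time.Now()
        resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
        })
        if err != nil {
            log.Fatal(err)
        }
        // The log reports the comparable pull taking ~4.41s.
        fmt.Printf("pulled %s in %s\n", resp.ImageRef, time.Since(start))
    }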
Dec 13 01:17:32.016668 containerd[1595]: time="2024-12-13T01:17:32.016597143Z" level=info msg="StartContainer for \"e61b9ca296ba58cfe9b6e71c0e8d663c88ae8435729025138bc0b94ee878a0be\" returns successfully" Dec 13 01:17:32.605116 kubelet[2799]: I1213 01:17:32.604823 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hhdbl" podStartSLOduration=6.6047705180000005 podStartE2EDuration="6.604770518s" podCreationTimestamp="2024-12-13 01:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:28.597294335 +0000 UTC m=+16.294711446" watchObservedRunningTime="2024-12-13 01:17:32.604770518 +0000 UTC m=+20.302187629" Dec 13 01:17:35.311293 kubelet[2799]: I1213 01:17:35.311253 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-8ld72" podStartSLOduration=4.894702523 podStartE2EDuration="9.311195726s" podCreationTimestamp="2024-12-13 01:17:26 +0000 UTC" firstStartedPulling="2024-12-13 01:17:27.507823558 +0000 UTC m=+15.205240658" lastFinishedPulling="2024-12-13 01:17:31.924316762 +0000 UTC m=+19.621733861" observedRunningTime="2024-12-13 01:17:32.605095763 +0000 UTC m=+20.302512875" watchObservedRunningTime="2024-12-13 01:17:35.311195726 +0000 UTC m=+23.008612835" Dec 13 01:17:35.312417 kubelet[2799]: I1213 01:17:35.311538 2799 topology_manager.go:215] "Topology Admit Handler" podUID="7daedd55-7c05-4bfd-aa38-bba2f2230fe8" podNamespace="calico-system" podName="calico-typha-798c8cd5c7-5qx8l" Dec 13 01:17:35.386347 kubelet[2799]: I1213 01:17:35.386302 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xntfx\" (UniqueName: \"kubernetes.io/projected/7daedd55-7c05-4bfd-aa38-bba2f2230fe8-kube-api-access-xntfx\") pod \"calico-typha-798c8cd5c7-5qx8l\" (UID: \"7daedd55-7c05-4bfd-aa38-bba2f2230fe8\") " pod="calico-system/calico-typha-798c8cd5c7-5qx8l" Dec 13 01:17:35.386535 kubelet[2799]: I1213 01:17:35.386366 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7daedd55-7c05-4bfd-aa38-bba2f2230fe8-tigera-ca-bundle\") pod \"calico-typha-798c8cd5c7-5qx8l\" (UID: \"7daedd55-7c05-4bfd-aa38-bba2f2230fe8\") " pod="calico-system/calico-typha-798c8cd5c7-5qx8l" Dec 13 01:17:35.386535 kubelet[2799]: I1213 01:17:35.386404 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7daedd55-7c05-4bfd-aa38-bba2f2230fe8-typha-certs\") pod \"calico-typha-798c8cd5c7-5qx8l\" (UID: \"7daedd55-7c05-4bfd-aa38-bba2f2230fe8\") " pod="calico-system/calico-typha-798c8cd5c7-5qx8l" Dec 13 01:17:35.541585 kubelet[2799]: I1213 01:17:35.540422 2799 topology_manager.go:215] "Topology Admit Handler" podUID="8e2bc208-dcf6-4a15-90a7-956a4c19244d" podNamespace="calico-system" podName="calico-node-c2ts7" Dec 13 01:17:35.588530 kubelet[2799]: I1213 01:17:35.588391 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e2bc208-dcf6-4a15-90a7-956a4c19244d-xtables-lock\") pod \"calico-node-c2ts7\" (UID: \"8e2bc208-dcf6-4a15-90a7-956a4c19244d\") " pod="calico-system/calico-node-c2ts7" Dec 13 01:17:35.588530 kubelet[2799]: I1213 01:17:35.588493 2799 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e2bc208-dcf6-4a15-90a7-956a4c19244d-lib-modules\") pod \"calico-node-c2ts7\" (UID: \"8e2bc208-dcf6-4a15-90a7-956a4c19244d\") " pod="calico-system/calico-node-c2ts7" Dec 13 01:17:35.589247 kubelet[2799]: I1213 01:17:35.589206 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8e2bc208-dcf6-4a15-90a7-956a4c19244d-cni-bin-dir\") pod \"calico-node-c2ts7\" (UID: \"8e2bc208-dcf6-4a15-90a7-956a4c19244d\") " pod="calico-system/calico-node-c2ts7" Dec 13 01:17:35.589473 kubelet[2799]: I1213 01:17:35.589362 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqmdf\" (UniqueName: \"kubernetes.io/projected/8e2bc208-dcf6-4a15-90a7-956a4c19244d-kube-api-access-hqmdf\") pod \"calico-node-c2ts7\" (UID: \"8e2bc208-dcf6-4a15-90a7-956a4c19244d\") " pod="calico-system/calico-node-c2ts7" Dec 13 01:17:35.589572 kubelet[2799]: I1213 01:17:35.589475 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8e2bc208-dcf6-4a15-90a7-956a4c19244d-node-certs\") pod \"calico-node-c2ts7\" (UID: \"8e2bc208-dcf6-4a15-90a7-956a4c19244d\") " pod="calico-system/calico-node-c2ts7" Dec 13 01:17:35.589572 kubelet[2799]: I1213 01:17:35.589560 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8e2bc208-dcf6-4a15-90a7-956a4c19244d-cni-net-dir\") pod \"calico-node-c2ts7\" (UID: \"8e2bc208-dcf6-4a15-90a7-956a4c19244d\") " pod="calico-system/calico-node-c2ts7" Dec 13 01:17:35.589756 kubelet[2799]: I1213 01:17:35.589640 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8e2bc208-dcf6-4a15-90a7-956a4c19244d-policysync\") pod \"calico-node-c2ts7\" (UID: \"8e2bc208-dcf6-4a15-90a7-956a4c19244d\") " pod="calico-system/calico-node-c2ts7" Dec 13 01:17:35.589756 kubelet[2799]: I1213 01:17:35.589714 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8e2bc208-dcf6-4a15-90a7-956a4c19244d-cni-log-dir\") pod \"calico-node-c2ts7\" (UID: \"8e2bc208-dcf6-4a15-90a7-956a4c19244d\") " pod="calico-system/calico-node-c2ts7" Dec 13 01:17:35.589970 kubelet[2799]: I1213 01:17:35.589951 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e2bc208-dcf6-4a15-90a7-956a4c19244d-tigera-ca-bundle\") pod \"calico-node-c2ts7\" (UID: \"8e2bc208-dcf6-4a15-90a7-956a4c19244d\") " pod="calico-system/calico-node-c2ts7" Dec 13 01:17:35.590405 kubelet[2799]: I1213 01:17:35.590378 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8e2bc208-dcf6-4a15-90a7-956a4c19244d-var-run-calico\") pod \"calico-node-c2ts7\" (UID: \"8e2bc208-dcf6-4a15-90a7-956a4c19244d\") " pod="calico-system/calico-node-c2ts7" Dec 13 01:17:35.590405 kubelet[2799]: I1213 01:17:35.590491 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" 
(UniqueName: \"kubernetes.io/host-path/8e2bc208-dcf6-4a15-90a7-956a4c19244d-var-lib-calico\") pod \"calico-node-c2ts7\" (UID: \"8e2bc208-dcf6-4a15-90a7-956a4c19244d\") " pod="calico-system/calico-node-c2ts7" Dec 13 01:17:35.590405 kubelet[2799]: I1213 01:17:35.590550 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8e2bc208-dcf6-4a15-90a7-956a4c19244d-flexvol-driver-host\") pod \"calico-node-c2ts7\" (UID: \"8e2bc208-dcf6-4a15-90a7-956a4c19244d\") " pod="calico-system/calico-node-c2ts7" Dec 13 01:17:35.621126 containerd[1595]: time="2024-12-13T01:17:35.621066405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-798c8cd5c7-5qx8l,Uid:7daedd55-7c05-4bfd-aa38-bba2f2230fe8,Namespace:calico-system,Attempt:0,}" Dec 13 01:17:35.682928 kubelet[2799]: I1213 01:17:35.682715 2799 topology_manager.go:215] "Topology Admit Handler" podUID="66c052f9-688a-4b76-896e-248573f70c35" podNamespace="calico-system" podName="csi-node-driver-876cd" Dec 13 01:17:35.683646 kubelet[2799]: E1213 01:17:35.683118 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-876cd" podUID="66c052f9-688a-4b76-896e-248573f70c35" Dec 13 01:17:35.699461 containerd[1595]: time="2024-12-13T01:17:35.698770799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:35.699461 containerd[1595]: time="2024-12-13T01:17:35.698839640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:35.700469 containerd[1595]: time="2024-12-13T01:17:35.698887386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:35.704592 kubelet[2799]: E1213 01:17:35.703342 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.704592 kubelet[2799]: W1213 01:17:35.703366 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.704592 kubelet[2799]: E1213 01:17:35.703395 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.707333 containerd[1595]: time="2024-12-13T01:17:35.704557986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:35.742130 kubelet[2799]: E1213 01:17:35.741415 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.742130 kubelet[2799]: W1213 01:17:35.741442 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.742130 kubelet[2799]: E1213 01:17:35.741472 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.770879 kubelet[2799]: E1213 01:17:35.770122 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.770879 kubelet[2799]: W1213 01:17:35.770149 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.770879 kubelet[2799]: E1213 01:17:35.770298 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.783269 kubelet[2799]: E1213 01:17:35.783162 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.783269 kubelet[2799]: W1213 01:17:35.783202 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.783269 kubelet[2799]: E1213 01:17:35.783233 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.786315 kubelet[2799]: E1213 01:17:35.785137 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.786315 kubelet[2799]: W1213 01:17:35.785159 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.786315 kubelet[2799]: E1213 01:17:35.785202 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.787943 kubelet[2799]: E1213 01:17:35.786734 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.787943 kubelet[2799]: W1213 01:17:35.786764 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.787943 kubelet[2799]: E1213 01:17:35.786784 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:17:35.790920 kubelet[2799]: E1213 01:17:35.789715 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.791230 kubelet[2799]: W1213 01:17:35.791084 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.791230 kubelet[2799]: E1213 01:17:35.791116 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.792840 kubelet[2799]: E1213 01:17:35.792695 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.792840 kubelet[2799]: W1213 01:17:35.792713 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.792840 kubelet[2799]: E1213 01:17:35.792735 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.794934 kubelet[2799]: E1213 01:17:35.794368 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.794934 kubelet[2799]: W1213 01:17:35.794385 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.794934 kubelet[2799]: E1213 01:17:35.794406 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.797286 kubelet[2799]: E1213 01:17:35.797044 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.797286 kubelet[2799]: W1213 01:17:35.797063 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.797286 kubelet[2799]: E1213 01:17:35.797152 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.800111 kubelet[2799]: E1213 01:17:35.797713 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.800111 kubelet[2799]: W1213 01:17:35.797741 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.800111 kubelet[2799]: E1213 01:17:35.797761 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:17:35.800565 kubelet[2799]: E1213 01:17:35.800412 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.800565 kubelet[2799]: W1213 01:17:35.800427 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.800565 kubelet[2799]: E1213 01:17:35.800452 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.801113 kubelet[2799]: E1213 01:17:35.800988 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.801113 kubelet[2799]: W1213 01:17:35.801006 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.801113 kubelet[2799]: E1213 01:17:35.801045 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.804014 kubelet[2799]: E1213 01:17:35.802213 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.804014 kubelet[2799]: W1213 01:17:35.802230 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.804014 kubelet[2799]: E1213 01:17:35.802250 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.804591 kubelet[2799]: E1213 01:17:35.804420 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.804591 kubelet[2799]: W1213 01:17:35.804438 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.804591 kubelet[2799]: E1213 01:17:35.804460 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.806249 kubelet[2799]: E1213 01:17:35.805460 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.806249 kubelet[2799]: W1213 01:17:35.805480 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.806249 kubelet[2799]: E1213 01:17:35.805500 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:17:35.807925 kubelet[2799]: E1213 01:17:35.806876 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.807925 kubelet[2799]: W1213 01:17:35.806936 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.807925 kubelet[2799]: E1213 01:17:35.806960 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.809056 kubelet[2799]: E1213 01:17:35.808257 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.809056 kubelet[2799]: W1213 01:17:35.808274 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.809056 kubelet[2799]: E1213 01:17:35.808303 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.811198 kubelet[2799]: E1213 01:17:35.811032 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.811198 kubelet[2799]: W1213 01:17:35.811051 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.811198 kubelet[2799]: E1213 01:17:35.811074 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.813835 kubelet[2799]: E1213 01:17:35.813024 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.813835 kubelet[2799]: W1213 01:17:35.813054 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.813835 kubelet[2799]: E1213 01:17:35.813075 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.816827 kubelet[2799]: E1213 01:17:35.815856 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.816827 kubelet[2799]: W1213 01:17:35.815873 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.817167 kubelet[2799]: E1213 01:17:35.817020 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:17:35.820870 kubelet[2799]: E1213 01:17:35.819845 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.820870 kubelet[2799]: W1213 01:17:35.819990 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.820870 kubelet[2799]: E1213 01:17:35.820015 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.822680 kubelet[2799]: E1213 01:17:35.822589 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.822979 kubelet[2799]: W1213 01:17:35.822820 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.822979 kubelet[2799]: E1213 01:17:35.822848 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.824857 kubelet[2799]: E1213 01:17:35.824562 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.824857 kubelet[2799]: W1213 01:17:35.824579 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.825335 kubelet[2799]: E1213 01:17:35.824925 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:17:35.825335 kubelet[2799]: I1213 01:17:35.824976 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/66c052f9-688a-4b76-896e-248573f70c35-varrun\") pod \"csi-node-driver-876cd\" (UID: \"66c052f9-688a-4b76-896e-248573f70c35\") " pod="calico-system/csi-node-driver-876cd" Dec 13 01:17:35.827708 kubelet[2799]: E1213 01:17:35.827651 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:17:35.827922 kubelet[2799]: W1213 01:17:35.827833 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:17:35.827922 kubelet[2799]: E1213 01:17:35.827864 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 13 01:17:35.828235 kubelet[2799]: I1213 01:17:35.828108 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k58cl\" (UniqueName: \"kubernetes.io/projected/66c052f9-688a-4b76-896e-248573f70c35-kube-api-access-k58cl\") pod \"csi-node-driver-876cd\" (UID: \"66c052f9-688a-4b76-896e-248573f70c35\") " pod="calico-system/csi-node-driver-876cd"
Dec 13 01:17:35.829570 kubelet[2799]: E1213 01:17:35.829398 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:17:35.829570 kubelet[2799]: W1213 01:17:35.829435 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:17:35.829570 kubelet[2799]: E1213 01:17:35.829468 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:17:35.831004 kubelet[2799]: I1213 01:17:35.830247 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/66c052f9-688a-4b76-896e-248573f70c35-socket-dir\") pod \"csi-node-driver-876cd\" (UID: \"66c052f9-688a-4b76-896e-248573f70c35\") " pod="calico-system/csi-node-driver-876cd"
Dec 13 01:17:35.838997 kubelet[2799]: I1213 01:17:35.838639 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/66c052f9-688a-4b76-896e-248573f70c35-registration-dir\") pod \"csi-node-driver-876cd\" (UID: \"66c052f9-688a-4b76-896e-248573f70c35\") " pod="calico-system/csi-node-driver-876cd"
Dec 13 01:17:35.845232 kubelet[2799]: I1213 01:17:35.845189 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66c052f9-688a-4b76-896e-248573f70c35-kubelet-dir\") pod \"csi-node-driver-876cd\" (UID: \"66c052f9-688a-4b76-896e-248573f70c35\") " pod="calico-system/csi-node-driver-876cd"
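The kubelet triple repeated throughout this window (driver-call.go:262, driver-call.go:149, plugins.go:730) is the FlexVolume dynamic prober at work: on each probe pass the kubelet execs every driver it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the single argument init and unmarshals the driver's stdout as JSON. Calico's nodeagent~uds/uds binary has not been installed yet (that is the flexvol-driver container's job later in this log), so the call produces no output, and unmarshalling empty output is exactly what yields "unexpected end of JSON input". A minimal Go sketch of that call path, assuming an illustrative driverStatus shape rather than the real upstream struct:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Illustrative stand-in for the status object kubelet expects a FlexVolume
// driver to print as JSON; the real upstream struct has more fields.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probe(driver string) error {
	// The prober effectively does this: exec the driver with "init" and
	// unmarshal whatever lands on stdout.
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		// With the binary absent, out stays empty and err is non-nil;
		// kubelet's exec wrapper reports this as the W-level "executable
		// file not found in $PATH" line.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	// Unmarshalling the empty output is what produces the E-level
	// "unexpected end of JSON input" lines.
	return json.Unmarshal(out, &st)
}

func main() {
	err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err) // unexpected end of JSON input
}

This reading is consistent with the log itself: the bursts die out shortly after the flexvol-driver container starts at 01:17:38 below.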
Dec 13 01:17:35.861230 containerd[1595]: time="2024-12-13T01:17:35.860667118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c2ts7,Uid:8e2bc208-dcf6-4a15-90a7-956a4c19244d,Namespace:calico-system,Attempt:0,}"
Dec 13 01:17:35.926059 containerd[1595]: time="2024-12-13T01:17:35.925257763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:35.926059 containerd[1595]: time="2024-12-13T01:17:35.925347790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:35.926059 containerd[1595]: time="2024-12-13T01:17:35.925378449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:35.930953 containerd[1595]: time="2024-12-13T01:17:35.930380119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:36.068302 containerd[1595]: time="2024-12-13T01:17:36.068118936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-798c8cd5c7-5qx8l,Uid:7daedd55-7c05-4bfd-aa38-bba2f2230fe8,Namespace:calico-system,Attempt:0,} returns sandbox id \"5334260b305f1968d42ef0386cf542faed1a7596b228fa2bfa733711f32f20e5\""
Dec 13 01:17:36.073199 containerd[1595]: time="2024-12-13T01:17:36.072981798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 01:17:36.079920 containerd[1595]: time="2024-12-13T01:17:36.079716192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c2ts7,Uid:8e2bc208-dcf6-4a15-90a7-956a4c19244d,Namespace:calico-system,Attempt:0,} returns sandbox id \"98bcc335c23d7ee656ccedb0f593b8dfbbba7e19f77bd4cad18a3b8989f25402\""
Dec 13 01:17:37.028001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1508078996.mount: Deactivated successfully.
Dec 13 01:17:37.501087 kubelet[2799]: E1213 01:17:37.500597 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-876cd" podUID="66c052f9-688a-4b76-896e-248573f70c35"
Dec 13 01:17:37.881605 containerd[1595]: time="2024-12-13T01:17:37.881462682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:37.883292 containerd[1595]: time="2024-12-13T01:17:37.883221693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Dec 13 01:17:37.885031 containerd[1595]: time="2024-12-13T01:17:37.884958536Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:37.888807 containerd[1595]: time="2024-12-13T01:17:37.888610834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:37.891221 containerd[1595]: time="2024-12-13T01:17:37.890962301Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.817841811s"
Dec 13 01:17:37.891221 containerd[1595]: time="2024-12-13T01:17:37.891020109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 01:17:37.892290 containerd[1595]: time="2024-12-13T01:17:37.892233600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 01:17:37.919658 containerd[1595]: time="2024-12-13T01:17:37.919612920Z" level=info msg="CreateContainer within sandbox \"5334260b305f1968d42ef0386cf542faed1a7596b228fa2bfa733711f32f20e5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 01:17:37.939697 containerd[1595]: time="2024-12-13T01:17:37.939643139Z" level=info msg="CreateContainer within sandbox \"5334260b305f1968d42ef0386cf542faed1a7596b228fa2bfa733711f32f20e5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1dd7e913fb9c258674df66a2c7476f9c4972c976a71a1b54a562884423a687b7\""
Dec 13 01:17:37.940501 containerd[1595]: time="2024-12-13T01:17:37.940443504Z" level=info msg="StartContainer for \"1dd7e913fb9c258674df66a2c7476f9c4972c976a71a1b54a562884423a687b7\""
Dec 13 01:17:38.040920 containerd[1595]: time="2024-12-13T01:17:38.040839360Z" level=info msg="StartContainer for \"1dd7e913fb9c258674df66a2c7476f9c4972c976a71a1b54a562884423a687b7\" returns successfully"
Dec 13 01:17:38.627348 kubelet[2799]: I1213 01:17:38.627307 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-798c8cd5c7-5qx8l" podStartSLOduration=1.8069411359999998 podStartE2EDuration="3.627249025s" podCreationTimestamp="2024-12-13 01:17:35 +0000 UTC" firstStartedPulling="2024-12-13 01:17:36.071072745 +0000 UTC m=+23.768489837" lastFinishedPulling="2024-12-13 01:17:37.891380632 +0000 UTC m=+25.588797726" observedRunningTime="2024-12-13 01:17:38.625217865 +0000 UTC m=+26.322634975" watchObservedRunningTime="2024-12-13 01:17:38.627249025 +0000 UTC m=+26.324666138"
Dec 13 01:17:38.643472 kubelet[2799]: E1213 01:17:38.643429 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:17:38.643472 kubelet[2799]: W1213 01:17:38.643458 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:17:38.643720 kubelet[2799]: E1213 01:17:38.643521 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:17:38.817237 containerd[1595]: time="2024-12-13T01:17:38.817179933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:38.818198 containerd[1595]: time="2024-12-13T01:17:38.818126230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Dec 13 01:17:38.820920 containerd[1595]: time="2024-12-13T01:17:38.819423288Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:38.822929 containerd[1595]: time="2024-12-13T01:17:38.822874612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:38.824296 containerd[1595]: time="2024-12-13T01:17:38.824248370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 931.971007ms"
Dec 13 01:17:38.824296 containerd[1595]: time="2024-12-13T01:17:38.824301913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Dec 13 01:17:38.832240 containerd[1595]: time="2024-12-13T01:17:38.832191833Z" level=info msg="CreateContainer within sandbox \"98bcc335c23d7ee656ccedb0f593b8dfbbba7e19f77bd4cad18a3b8989f25402\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 01:17:38.850978 containerd[1595]: time="2024-12-13T01:17:38.850928713Z" level=info msg="CreateContainer within sandbox \"98bcc335c23d7ee656ccedb0f593b8dfbbba7e19f77bd4cad18a3b8989f25402\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d91f1558bc5de867b0e5d3d3e7078df778a6372951b26b6ac076db2d6e0f243\""
Dec 13 01:17:38.851585 containerd[1595]: time="2024-12-13T01:17:38.851469371Z" level=info msg="StartContainer for \"5d91f1558bc5de867b0e5d3d3e7078df778a6372951b26b6ac076db2d6e0f243\""
Dec 13 01:17:38.937814 containerd[1595]: time="2024-12-13T01:17:38.937745925Z" level=info msg="StartContainer for \"5d91f1558bc5de867b0e5d3d3e7078df778a6372951b26b6ac076db2d6e0f243\" returns successfully"
Dec 13 01:17:38.985627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d91f1558bc5de867b0e5d3d3e7078df778a6372951b26b6ac076db2d6e0f243-rootfs.mount: Deactivated successfully.
Dec 13 01:17:39.501197 kubelet[2799]: E1213 01:17:39.500665 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-876cd" podUID="66c052f9-688a-4b76-896e-248573f70c35"
Dec 13 01:17:39.589815 containerd[1595]: time="2024-12-13T01:17:39.589722003Z" level=info msg="shim disconnected" id=5d91f1558bc5de867b0e5d3d3e7078df778a6372951b26b6ac076db2d6e0f243 namespace=k8s.io
Dec 13 01:17:39.589815 containerd[1595]: time="2024-12-13T01:17:39.589809335Z" level=warning msg="cleaning up after shim disconnected" id=5d91f1558bc5de867b0e5d3d3e7078df778a6372951b26b6ac076db2d6e0f243 namespace=k8s.io
Dec 13 01:17:39.589815 containerd[1595]: time="2024-12-13T01:17:39.589823124Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:39.616138 containerd[1595]: time="2024-12-13T01:17:39.616084034Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:17:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:17:39.624085 kubelet[2799]: I1213 01:17:39.624046 2799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:17:39.626631 containerd[1595]: time="2024-12-13T01:17:39.626497892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 01:17:41.500342 kubelet[2799]: E1213 01:17:41.500183 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-876cd" podUID="66c052f9-688a-4b76-896e-248573f70c35"
Dec 13 01:17:43.500730 kubelet[2799]: E1213 01:17:43.500658 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-876cd" podUID="66c052f9-688a-4b76-896e-248573f70c35"
Dec 13 01:17:43.537008 containerd[1595]: time="2024-12-13T01:17:43.536947031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:43.538413 containerd[1595]: time="2024-12-13T01:17:43.538341277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Dec 13 01:17:43.539817 containerd[1595]: time="2024-12-13T01:17:43.539739574Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:43.542974 containerd[1595]: time="2024-12-13T01:17:43.542909853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:43.544114 containerd[1595]: time="2024-12-13T01:17:43.544034903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.917481775s"
Dec 13 01:17:43.544114 containerd[1595]: time="2024-12-13T01:17:43.544080633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Dec 13 01:17:43.547272 containerd[1595]: time="2024-12-13T01:17:43.547131111Z" level=info msg="CreateContainer within sandbox \"98bcc335c23d7ee656ccedb0f593b8dfbbba7e19f77bd4cad18a3b8989f25402\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 01:17:43.567076 containerd[1595]: time="2024-12-13T01:17:43.567032046Z" level=info msg="CreateContainer within sandbox \"98bcc335c23d7ee656ccedb0f593b8dfbbba7e19f77bd4cad18a3b8989f25402\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"536894cd111c7c0e64ce4698845b01bada8cad7fd8746c8fdaa70ec0427936f7\""
Dec 13 01:17:43.567766 containerd[1595]: time="2024-12-13T01:17:43.567583181Z" level=info msg="StartContainer for \"536894cd111c7c0e64ce4698845b01bada8cad7fd8746c8fdaa70ec0427936f7\""
Dec 13 01:17:43.662986 containerd[1595]: time="2024-12-13T01:17:43.662886076Z" level=info msg="StartContainer for \"536894cd111c7c0e64ce4698845b01bada8cad7fd8746c8fdaa70ec0427936f7\" returns successfully"
Dec 13 01:17:44.639092 containerd[1595]: time="2024-12-13T01:17:44.638866137Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:17:44.696568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-536894cd111c7c0e64ce4698845b01bada8cad7fd8746c8fdaa70ec0427936f7-rootfs.mount: Deactivated successfully.
Dec 13 01:17:44.700547 kubelet[2799]: I1213 01:17:44.699567 2799 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:17:44.733486 kubelet[2799]: I1213 01:17:44.729665 2799 topology_manager.go:215] "Topology Admit Handler" podUID="0b515d89-e330-44b5-a936-07964075e0de" podNamespace="kube-system" podName="coredns-76f75df574-kmgns" Dec 13 01:17:44.736922 kubelet[2799]: I1213 01:17:44.735509 2799 topology_manager.go:215] "Topology Admit Handler" podUID="8fbdcc41-c83a-49a1-b4de-a5d542202b50" podNamespace="kube-system" podName="coredns-76f75df574-hwqr9" Dec 13 01:17:44.754386 kubelet[2799]: I1213 01:17:44.754352 2799 topology_manager.go:215] "Topology Admit Handler" podUID="6dceeefa-4cd7-42f8-ab1a-b8b45660c051" podNamespace="calico-apiserver" podName="calico-apiserver-7b6fb557cc-wgc99" Dec 13 01:17:44.754595 kubelet[2799]: I1213 01:17:44.754573 2799 topology_manager.go:215] "Topology Admit Handler" podUID="c3498958-dd32-4646-b429-46dcbb7deac3" podNamespace="calico-system" podName="calico-kube-controllers-59fdf5598b-5hh5d" Dec 13 01:17:44.754862 kubelet[2799]: I1213 01:17:44.754754 2799 topology_manager.go:215] "Topology Admit Handler" podUID="b2425adc-9348-4776-a9e2-adeda1a66be5" podNamespace="calico-apiserver" podName="calico-apiserver-7b6fb557cc-4xz7k" Dec 13 01:17:44.859509 kubelet[2799]: I1213 01:17:44.859420 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxdz\" (UniqueName: \"kubernetes.io/projected/0b515d89-e330-44b5-a936-07964075e0de-kube-api-access-7dxdz\") pod \"coredns-76f75df574-kmgns\" (UID: \"0b515d89-e330-44b5-a936-07964075e0de\") " pod="kube-system/coredns-76f75df574-kmgns" Dec 13 01:17:44.859509 kubelet[2799]: I1213 01:17:44.859524 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b2425adc-9348-4776-a9e2-adeda1a66be5-calico-apiserver-certs\") pod \"calico-apiserver-7b6fb557cc-4xz7k\" (UID: \"b2425adc-9348-4776-a9e2-adeda1a66be5\") " pod="calico-apiserver/calico-apiserver-7b6fb557cc-4xz7k" Dec 13 01:17:44.859779 kubelet[2799]: I1213 01:17:44.859564 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcss7\" (UniqueName: \"kubernetes.io/projected/b2425adc-9348-4776-a9e2-adeda1a66be5-kube-api-access-jcss7\") pod \"calico-apiserver-7b6fb557cc-4xz7k\" (UID: \"b2425adc-9348-4776-a9e2-adeda1a66be5\") " pod="calico-apiserver/calico-apiserver-7b6fb557cc-4xz7k" Dec 13 01:17:44.859779 kubelet[2799]: I1213 01:17:44.859598 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88wx7\" (UniqueName: \"kubernetes.io/projected/6dceeefa-4cd7-42f8-ab1a-b8b45660c051-kube-api-access-88wx7\") pod \"calico-apiserver-7b6fb557cc-wgc99\" (UID: \"6dceeefa-4cd7-42f8-ab1a-b8b45660c051\") " pod="calico-apiserver/calico-apiserver-7b6fb557cc-wgc99" Dec 13 01:17:44.859779 kubelet[2799]: I1213 01:17:44.859636 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg2l2\" (UniqueName: \"kubernetes.io/projected/c3498958-dd32-4646-b429-46dcbb7deac3-kube-api-access-fg2l2\") pod \"calico-kube-controllers-59fdf5598b-5hh5d\" (UID: \"c3498958-dd32-4646-b429-46dcbb7deac3\") " pod="calico-system/calico-kube-controllers-59fdf5598b-5hh5d" Dec 13 01:17:44.859779 kubelet[2799]: I1213 01:17:44.859692 2799 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctj24\" (UniqueName: \"kubernetes.io/projected/8fbdcc41-c83a-49a1-b4de-a5d542202b50-kube-api-access-ctj24\") pod \"coredns-76f75df574-hwqr9\" (UID: \"8fbdcc41-c83a-49a1-b4de-a5d542202b50\") " pod="kube-system/coredns-76f75df574-hwqr9" Dec 13 01:17:44.859779 kubelet[2799]: I1213 01:17:44.859728 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b515d89-e330-44b5-a936-07964075e0de-config-volume\") pod \"coredns-76f75df574-kmgns\" (UID: \"0b515d89-e330-44b5-a936-07964075e0de\") " pod="kube-system/coredns-76f75df574-kmgns" Dec 13 01:17:44.860091 kubelet[2799]: I1213 01:17:44.859767 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fbdcc41-c83a-49a1-b4de-a5d542202b50-config-volume\") pod \"coredns-76f75df574-hwqr9\" (UID: \"8fbdcc41-c83a-49a1-b4de-a5d542202b50\") " pod="kube-system/coredns-76f75df574-hwqr9" Dec 13 01:17:44.860091 kubelet[2799]: I1213 01:17:44.859808 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3498958-dd32-4646-b429-46dcbb7deac3-tigera-ca-bundle\") pod \"calico-kube-controllers-59fdf5598b-5hh5d\" (UID: \"c3498958-dd32-4646-b429-46dcbb7deac3\") " pod="calico-system/calico-kube-controllers-59fdf5598b-5hh5d" Dec 13 01:17:44.860091 kubelet[2799]: I1213 01:17:44.859851 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6dceeefa-4cd7-42f8-ab1a-b8b45660c051-calico-apiserver-certs\") pod \"calico-apiserver-7b6fb557cc-wgc99\" (UID: \"6dceeefa-4cd7-42f8-ab1a-b8b45660c051\") " pod="calico-apiserver/calico-apiserver-7b6fb557cc-wgc99" Dec 13 01:17:45.039256 containerd[1595]: time="2024-12-13T01:17:45.039201971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kmgns,Uid:0b515d89-e330-44b5-a936-07964075e0de,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:45.061224 containerd[1595]: time="2024-12-13T01:17:45.060856386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6fb557cc-4xz7k,Uid:b2425adc-9348-4776-a9e2-adeda1a66be5,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:17:45.061682 containerd[1595]: time="2024-12-13T01:17:45.061576011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwqr9,Uid:8fbdcc41-c83a-49a1-b4de-a5d542202b50,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:45.068501 containerd[1595]: time="2024-12-13T01:17:45.068464284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6fb557cc-wgc99,Uid:6dceeefa-4cd7-42f8-ab1a-b8b45660c051,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:17:45.073405 containerd[1595]: time="2024-12-13T01:17:45.073352169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59fdf5598b-5hh5d,Uid:c3498958-dd32-4646-b429-46dcbb7deac3,Namespace:calico-system,Attempt:0,}" Dec 13 01:17:45.458466 containerd[1595]: time="2024-12-13T01:17:45.458392878Z" level=info msg="shim disconnected" id=536894cd111c7c0e64ce4698845b01bada8cad7fd8746c8fdaa70ec0427936f7 namespace=k8s.io Dec 13 01:17:45.458466 containerd[1595]: time="2024-12-13T01:17:45.458458437Z" level=warning 
msg="cleaning up after shim disconnected" id=536894cd111c7c0e64ce4698845b01bada8cad7fd8746c8fdaa70ec0427936f7 namespace=k8s.io Dec 13 01:17:45.458466 containerd[1595]: time="2024-12-13T01:17:45.458470953Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:17:45.519617 containerd[1595]: time="2024-12-13T01:17:45.519232264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-876cd,Uid:66c052f9-688a-4b76-896e-248573f70c35,Namespace:calico-system,Attempt:0,}" Dec 13 01:17:45.701943 containerd[1595]: time="2024-12-13T01:17:45.701363581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:17:45.827868 containerd[1595]: time="2024-12-13T01:17:45.827239027Z" level=error msg="Failed to destroy network for sandbox \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.830852 containerd[1595]: time="2024-12-13T01:17:45.830620016Z" level=error msg="encountered an error cleaning up failed sandbox \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.835287 containerd[1595]: time="2024-12-13T01:17:45.834598662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6fb557cc-wgc99,Uid:6dceeefa-4cd7-42f8-ab1a-b8b45660c051,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.835971 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148-shm.mount: Deactivated successfully. 
Dec 13 01:17:45.837705 kubelet[2799]: E1213 01:17:45.837583 2799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.837705 kubelet[2799]: E1213 01:17:45.837676 2799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6fb557cc-wgc99" Dec 13 01:17:45.838967 kubelet[2799]: E1213 01:17:45.837715 2799 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6fb557cc-wgc99" Dec 13 01:17:45.838967 kubelet[2799]: E1213 01:17:45.837814 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b6fb557cc-wgc99_calico-apiserver(6dceeefa-4cd7-42f8-ab1a-b8b45660c051)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b6fb557cc-wgc99_calico-apiserver(6dceeefa-4cd7-42f8-ab1a-b8b45660c051)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6fb557cc-wgc99" podUID="6dceeefa-4cd7-42f8-ab1a-b8b45660c051" Dec 13 01:17:45.858210 containerd[1595]: time="2024-12-13T01:17:45.858079699Z" level=error msg="Failed to destroy network for sandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.860040 containerd[1595]: time="2024-12-13T01:17:45.859097279Z" level=error msg="encountered an error cleaning up failed sandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.860040 containerd[1595]: time="2024-12-13T01:17:45.859996287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6fb557cc-4xz7k,Uid:b2425adc-9348-4776-a9e2-adeda1a66be5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.860438 kubelet[2799]: E1213 01:17:45.860394 2799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.860562 kubelet[2799]: E1213 01:17:45.860476 2799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6fb557cc-4xz7k" Dec 13 01:17:45.860562 kubelet[2799]: E1213 01:17:45.860521 2799 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6fb557cc-4xz7k" Dec 13 01:17:45.862114 kubelet[2799]: E1213 01:17:45.862087 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b6fb557cc-4xz7k_calico-apiserver(b2425adc-9348-4776-a9e2-adeda1a66be5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b6fb557cc-4xz7k_calico-apiserver(b2425adc-9348-4776-a9e2-adeda1a66be5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6fb557cc-4xz7k" podUID="b2425adc-9348-4776-a9e2-adeda1a66be5" Dec 13 01:17:45.866583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3-shm.mount: Deactivated successfully. 
Dec 13 01:17:45.885437 containerd[1595]: time="2024-12-13T01:17:45.885372518Z" level=error msg="Failed to destroy network for sandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.887381 containerd[1595]: time="2024-12-13T01:17:45.887332935Z" level=error msg="encountered an error cleaning up failed sandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.887516 containerd[1595]: time="2024-12-13T01:17:45.887430223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwqr9,Uid:8fbdcc41-c83a-49a1-b4de-a5d542202b50,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.888937 kubelet[2799]: E1213 01:17:45.888123 2799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.889304 kubelet[2799]: E1213 01:17:45.889060 2799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hwqr9" Dec 13 01:17:45.889304 kubelet[2799]: E1213 01:17:45.889122 2799 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hwqr9" Dec 13 01:17:45.889304 kubelet[2799]: E1213 01:17:45.889258 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hwqr9_kube-system(8fbdcc41-c83a-49a1-b4de-a5d542202b50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hwqr9_kube-system(8fbdcc41-c83a-49a1-b4de-a5d542202b50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-hwqr9" podUID="8fbdcc41-c83a-49a1-b4de-a5d542202b50" Dec 13 01:17:45.892639 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208-shm.mount: Deactivated successfully. Dec 13 01:17:45.907119 containerd[1595]: time="2024-12-13T01:17:45.907070939Z" level=error msg="Failed to destroy network for sandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.911942 containerd[1595]: time="2024-12-13T01:17:45.911152824Z" level=error msg="encountered an error cleaning up failed sandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.911942 containerd[1595]: time="2024-12-13T01:17:45.911243296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kmgns,Uid:0b515d89-e330-44b5-a936-07964075e0de,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.913121 kubelet[2799]: E1213 01:17:45.911507 2799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.913121 kubelet[2799]: E1213 01:17:45.911573 2799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kmgns" Dec 13 01:17:45.913121 kubelet[2799]: E1213 01:17:45.911607 2799 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kmgns" Dec 13 01:17:45.912772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054-shm.mount: Deactivated successfully. 
Dec 13 01:17:45.913482 containerd[1595]: time="2024-12-13T01:17:45.912329498Z" level=error msg="Failed to destroy network for sandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.913544 kubelet[2799]: E1213 01:17:45.911683 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kmgns_kube-system(0b515d89-e330-44b5-a936-07964075e0de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kmgns_kube-system(0b515d89-e330-44b5-a936-07964075e0de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kmgns" podUID="0b515d89-e330-44b5-a936-07964075e0de" Dec 13 01:17:45.915922 containerd[1595]: time="2024-12-13T01:17:45.914598599Z" level=error msg="encountered an error cleaning up failed sandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.915922 containerd[1595]: time="2024-12-13T01:17:45.914667787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-876cd,Uid:66c052f9-688a-4b76-896e-248573f70c35,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.916479 kubelet[2799]: E1213 01:17:45.916215 2799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.916479 kubelet[2799]: E1213 01:17:45.916270 2799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-876cd" Dec 13 01:17:45.916479 kubelet[2799]: E1213 01:17:45.916308 2799 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-876cd" Dec 13 01:17:45.916700 kubelet[2799]: E1213 01:17:45.916378 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-876cd_calico-system(66c052f9-688a-4b76-896e-248573f70c35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-876cd_calico-system(66c052f9-688a-4b76-896e-248573f70c35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-876cd" podUID="66c052f9-688a-4b76-896e-248573f70c35" Dec 13 01:17:45.918962 containerd[1595]: time="2024-12-13T01:17:45.918888122Z" level=error msg="Failed to destroy network for sandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.919326 containerd[1595]: time="2024-12-13T01:17:45.919286123Z" level=error msg="encountered an error cleaning up failed sandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.919426 containerd[1595]: time="2024-12-13T01:17:45.919355198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59fdf5598b-5hh5d,Uid:c3498958-dd32-4646-b429-46dcbb7deac3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.919638 kubelet[2799]: E1213 01:17:45.919616 2799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:45.919761 kubelet[2799]: E1213 01:17:45.919674 2799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59fdf5598b-5hh5d" Dec 13 01:17:45.919761 kubelet[2799]: E1213 01:17:45.919719 2799 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59fdf5598b-5hh5d" Dec 13 01:17:45.919881 kubelet[2799]: E1213 01:17:45.919790 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59fdf5598b-5hh5d_calico-system(c3498958-dd32-4646-b429-46dcbb7deac3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59fdf5598b-5hh5d_calico-system(c3498958-dd32-4646-b429-46dcbb7deac3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59fdf5598b-5hh5d" podUID="c3498958-dd32-4646-b429-46dcbb7deac3" Dec 13 01:17:46.681422 kubelet[2799]: I1213 01:17:46.681390 2799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:17:46.683924 containerd[1595]: time="2024-12-13T01:17:46.683846838Z" level=info msg="StopPodSandbox for \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\"" Dec 13 01:17:46.686305 containerd[1595]: time="2024-12-13T01:17:46.686271232Z" level=info msg="Ensure that sandbox eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88 in task-service has been cleanup successfully" Dec 13 01:17:46.698185 kubelet[2799]: I1213 01:17:46.694765 2799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Dec 13 01:17:46.698922 containerd[1595]: time="2024-12-13T01:17:46.698804876Z" level=info msg="StopPodSandbox for \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\"" Dec 13 01:17:46.699059 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4-shm.mount: Deactivated successfully. Dec 13 01:17:46.699735 containerd[1595]: time="2024-12-13T01:17:46.699231406Z" level=info msg="Ensure that sandbox c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054 in task-service has been cleanup successfully" Dec 13 01:17:46.700493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88-shm.mount: Deactivated successfully. 
Dec 13 01:17:46.708948 kubelet[2799]: I1213 01:17:46.708819 2799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Dec 13 01:17:46.711148 containerd[1595]: time="2024-12-13T01:17:46.711111984Z" level=info msg="StopPodSandbox for \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\"" Dec 13 01:17:46.712408 containerd[1595]: time="2024-12-13T01:17:46.711986041Z" level=info msg="Ensure that sandbox 75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4 in task-service has been cleanup successfully" Dec 13 01:17:46.720362 kubelet[2799]: I1213 01:17:46.720332 2799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Dec 13 01:17:46.724695 containerd[1595]: time="2024-12-13T01:17:46.723988173Z" level=info msg="StopPodSandbox for \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\"" Dec 13 01:17:46.726692 containerd[1595]: time="2024-12-13T01:17:46.726630617Z" level=info msg="Ensure that sandbox 17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148 in task-service has been cleanup successfully" Dec 13 01:17:46.732332 kubelet[2799]: I1213 01:17:46.732063 2799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Dec 13 01:17:46.737882 containerd[1595]: time="2024-12-13T01:17:46.737815376Z" level=info msg="StopPodSandbox for \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\"" Dec 13 01:17:46.738148 containerd[1595]: time="2024-12-13T01:17:46.738101072Z" level=info msg="Ensure that sandbox c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208 in task-service has been cleanup successfully" Dec 13 01:17:46.757701 kubelet[2799]: I1213 01:17:46.757068 2799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Dec 13 01:17:46.758257 containerd[1595]: time="2024-12-13T01:17:46.758200584Z" level=info msg="StopPodSandbox for \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\"" Dec 13 01:17:46.758924 containerd[1595]: time="2024-12-13T01:17:46.758434309Z" level=info msg="Ensure that sandbox 28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3 in task-service has been cleanup successfully" Dec 13 01:17:46.866399 containerd[1595]: time="2024-12-13T01:17:46.866340010Z" level=error msg="StopPodSandbox for \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\" failed" error="failed to destroy network for sandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:46.867169 kubelet[2799]: E1213 01:17:46.867105 2799 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Dec 13 01:17:46.868932 
kubelet[2799]: E1213 01:17:46.867999 2799 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054"} Dec 13 01:17:46.868932 kubelet[2799]: E1213 01:17:46.868072 2799 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b515d89-e330-44b5-a936-07964075e0de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:17:46.868932 kubelet[2799]: E1213 01:17:46.868136 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b515d89-e330-44b5-a936-07964075e0de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kmgns" podUID="0b515d89-e330-44b5-a936-07964075e0de" Dec 13 01:17:46.884445 containerd[1595]: time="2024-12-13T01:17:46.884381865Z" level=error msg="StopPodSandbox for \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\" failed" error="failed to destroy network for sandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:46.884923 kubelet[2799]: E1213 01:17:46.884874 2799 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Dec 13 01:17:46.885924 kubelet[2799]: E1213 01:17:46.885150 2799 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4"} Dec 13 01:17:46.885924 kubelet[2799]: E1213 01:17:46.885257 2799 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66c052f9-688a-4b76-896e-248573f70c35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:17:46.885924 kubelet[2799]: E1213 01:17:46.885312 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66c052f9-688a-4b76-896e-248573f70c35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-876cd" podUID="66c052f9-688a-4b76-896e-248573f70c35" Dec 13 01:17:46.918212 containerd[1595]: time="2024-12-13T01:17:46.918149517Z" level=error msg="StopPodSandbox for \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\" failed" error="failed to destroy network for sandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:46.918732 kubelet[2799]: E1213 01:17:46.918583 2799 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:17:46.918732 kubelet[2799]: E1213 01:17:46.918643 2799 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88"} Dec 13 01:17:46.918732 kubelet[2799]: E1213 01:17:46.918697 2799 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3498958-dd32-4646-b429-46dcbb7deac3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:17:46.919201 kubelet[2799]: E1213 01:17:46.918745 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3498958-dd32-4646-b429-46dcbb7deac3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59fdf5598b-5hh5d" podUID="c3498958-dd32-4646-b429-46dcbb7deac3" Dec 13 01:17:46.923171 containerd[1595]: time="2024-12-13T01:17:46.922972974Z" level=error msg="StopPodSandbox for \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\" failed" error="failed to destroy network for sandbox \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:46.924195 kubelet[2799]: E1213 01:17:46.924110 2799 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Dec 13 01:17:46.924195 kubelet[2799]: E1213 01:17:46.924171 2799 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148"} Dec 13 01:17:46.924195 kubelet[2799]: E1213 01:17:46.924231 2799 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6dceeefa-4cd7-42f8-ab1a-b8b45660c051\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:17:46.925289 kubelet[2799]: E1213 01:17:46.924274 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6dceeefa-4cd7-42f8-ab1a-b8b45660c051\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6fb557cc-wgc99" podUID="6dceeefa-4cd7-42f8-ab1a-b8b45660c051" Dec 13 01:17:46.925289 kubelet[2799]: E1213 01:17:46.925033 2799 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Dec 13 01:17:46.925289 kubelet[2799]: E1213 01:17:46.925070 2799 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3"} Dec 13 01:17:46.925289 kubelet[2799]: E1213 01:17:46.925123 2799 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b2425adc-9348-4776-a9e2-adeda1a66be5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:17:46.925612 containerd[1595]: time="2024-12-13T01:17:46.924756944Z" level=error msg="StopPodSandbox for \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\" failed" error="failed to destroy network for sandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Dec 13 01:17:46.925677 kubelet[2799]: E1213 01:17:46.925173 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b2425adc-9348-4776-a9e2-adeda1a66be5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6fb557cc-4xz7k" podUID="b2425adc-9348-4776-a9e2-adeda1a66be5" Dec 13 01:17:46.937074 containerd[1595]: time="2024-12-13T01:17:46.933806332Z" level=error msg="StopPodSandbox for \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\" failed" error="failed to destroy network for sandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:17:46.937202 kubelet[2799]: E1213 01:17:46.934092 2799 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Dec 13 01:17:46.937202 kubelet[2799]: E1213 01:17:46.934138 2799 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208"} Dec 13 01:17:46.937202 kubelet[2799]: E1213 01:17:46.934196 2799 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8fbdcc41-c83a-49a1-b4de-a5d542202b50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:17:46.937202 kubelet[2799]: E1213 01:17:46.934242 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8fbdcc41-c83a-49a1-b4de-a5d542202b50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hwqr9" podUID="8fbdcc41-c83a-49a1-b4de-a5d542202b50" Dec 13 01:17:52.311718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2656606801.mount: Deactivated successfully. 
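All of the sandbox create and delete failures above share one root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that only a running calico/node container writes after mounting /var/lib/calico/. Until the calico/node image pulled below starts, every CNI ADD and DEL fails identically. A hedged stand-in for that gate (the path and error wording come straight from the log; the surrounding logic is illustrative, not Calico's actual code):

    // nodenamegate.go - sketch of the /var/lib/calico/nodename precondition
    // behind the repeated "stat /var/lib/calico/nodename: no such file or
    // directory" failures above.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "strings"
    )

    func nodename(path string) (string, error) {
        b, err := os.ReadFile(path)
        if errors.Is(err, os.ErrNotExist) {
            // Same advice the plugin logs: the file appears once calico/node
            // is running and has mounted /var/lib/calico/.
            return "", fmt.Errorf("stat %s: no such file or directory", path)
        }
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        name, err := nodename("/var/lib/calico/nodename")
        fmt.Println(name, err)
    }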
Dec 13 01:17:52.349217 containerd[1595]: time="2024-12-13T01:17:52.349126423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:52.350692 containerd[1595]: time="2024-12-13T01:17:52.350613308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:17:52.352084 containerd[1595]: time="2024-12-13T01:17:52.352013363Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:52.355302 containerd[1595]: time="2024-12-13T01:17:52.355216544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:52.356826 containerd[1595]: time="2024-12-13T01:17:52.356185111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.63814965s" Dec 13 01:17:52.356826 containerd[1595]: time="2024-12-13T01:17:52.356232368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:17:52.380486 containerd[1595]: time="2024-12-13T01:17:52.380127182Z" level=info msg="CreateContainer within sandbox \"98bcc335c23d7ee656ccedb0f593b8dfbbba7e19f77bd4cad18a3b8989f25402\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:17:52.405438 containerd[1595]: time="2024-12-13T01:17:52.405382038Z" level=info msg="CreateContainer within sandbox \"98bcc335c23d7ee656ccedb0f593b8dfbbba7e19f77bd4cad18a3b8989f25402\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"915069dbf93e944ae5a607d196ea4a16fbddc7a6d94f65fadb17e918ca679b59\"" Dec 13 01:17:52.407791 containerd[1595]: time="2024-12-13T01:17:52.407744082Z" level=info msg="StartContainer for \"915069dbf93e944ae5a607d196ea4a16fbddc7a6d94f65fadb17e918ca679b59\"" Dec 13 01:17:52.491152 containerd[1595]: time="2024-12-13T01:17:52.491098717Z" level=info msg="StartContainer for \"915069dbf93e944ae5a607d196ea4a16fbddc7a6d94f65fadb17e918ca679b59\" returns successfully" Dec 13 01:17:52.529942 kubelet[2799]: I1213 01:17:52.526390 2799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:17:52.632492 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:17:52.632656 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 01:17:53.778966 kubelet[2799]: I1213 01:17:53.778928 2799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:17:54.518267 kernel: bpftool[4059]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:17:54.784192 systemd-networkd[1220]: vxlan.calico: Link UP Dec 13 01:17:54.784209 systemd-networkd[1220]: vxlan.calico: Gained carrier Dec 13 01:17:56.067629 systemd-networkd[1220]: vxlan.calico: Gained IPv6LL Dec 13 01:17:57.502780 containerd[1595]: time="2024-12-13T01:17:57.501435754Z" level=info msg="StopPodSandbox for \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\"" Dec 13 01:17:57.561840 kubelet[2799]: I1213 01:17:57.561286 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-c2ts7" podStartSLOduration=6.287586088 podStartE2EDuration="22.561220938s" podCreationTimestamp="2024-12-13 01:17:35 +0000 UTC" firstStartedPulling="2024-12-13 01:17:36.082978497 +0000 UTC m=+23.780395596" lastFinishedPulling="2024-12-13 01:17:52.356613347 +0000 UTC m=+40.054030446" observedRunningTime="2024-12-13 01:17:52.803516909 +0000 UTC m=+40.500934024" watchObservedRunningTime="2024-12-13 01:17:57.561220938 +0000 UTC m=+45.258638053" Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.558 [INFO][4143] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.560 [INFO][4143] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" iface="eth0" netns="/var/run/netns/cni-edeb865d-a70d-f7a6-44d6-6140facfc011" Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.561 [INFO][4143] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" iface="eth0" netns="/var/run/netns/cni-edeb865d-a70d-f7a6-44d6-6140facfc011" Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.561 [INFO][4143] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" iface="eth0" netns="/var/run/netns/cni-edeb865d-a70d-f7a6-44d6-6140facfc011" Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.561 [INFO][4143] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.562 [INFO][4143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.587 [INFO][4149] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" HandleID="k8s-pod-network.c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.588 [INFO][4149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.588 [INFO][4149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.598 [WARNING][4149] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" HandleID="k8s-pod-network.c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.598 [INFO][4149] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" HandleID="k8s-pod-network.c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.600 [INFO][4149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:17:57.604363 containerd[1595]: 2024-12-13 01:17:57.602 [INFO][4143] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Dec 13 01:17:57.605130 containerd[1595]: time="2024-12-13T01:17:57.605000353Z" level=info msg="TearDown network for sandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\" successfully" Dec 13 01:17:57.605130 containerd[1595]: time="2024-12-13T01:17:57.605044160Z" level=info msg="StopPodSandbox for \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\" returns successfully" Dec 13 01:17:57.606849 containerd[1595]: time="2024-12-13T01:17:57.606806255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwqr9,Uid:8fbdcc41-c83a-49a1-b4de-a5d542202b50,Namespace:kube-system,Attempt:1,}" Dec 13 01:17:57.613194 systemd[1]: run-netns-cni\x2dedeb865d\x2da70d\x2df7a6\x2d44d6\x2d6140facfc011.mount: Deactivated successfully. 
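The teardown at 01:17:57 succeeds even though the veth and the IPAM handle are already gone ("Workload's veth was already gone", "Asked to release address but it doesn't exist. Ignoring"): CNI DEL is expected to be idempotent, so cleanup treats not-found as success. A self-contained sketch of that convention (ipamRelease, errNotFound, and the in-memory store are invented for illustration; the real logic lives in Calico's cni-plugin):

    // idempotentdel.go - sketch of the idempotent-DEL convention visible in
    // the teardown above. All names here are illustrative, not Calico's.
    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("allocation not found")

    // fake IPAM store standing in for Calico's datastore.
    var allocations = map[string]string{}

    func ipamRelease(handleID string) error {
        if _, ok := allocations[handleID]; !ok {
            return errNotFound
        }
        delete(allocations, handleID)
        return nil
    }

    // cmdDel treats a missing allocation as success, matching the
    // "Asked to release address but it doesn't exist. Ignoring" line.
    func cmdDel(handleID string) error {
        if err := ipamRelease(handleID); err != nil && !errors.Is(err, errNotFound) {
            return err
        }
        return nil
    }

    func main() {
        fmt.Println(cmdDel("k8s-pod-network.c698c386...")) // prints <nil>
    }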
Dec 13 01:17:57.759730 systemd-networkd[1220]: calie9c79aa35f2: Link UP Dec 13 01:17:57.760115 systemd-networkd[1220]: calie9c79aa35f2: Gained carrier Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.672 [INFO][4155] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0 coredns-76f75df574- kube-system 8fbdcc41-c83a-49a1-b4de-a5d542202b50 776 0 2024-12-13 01:17:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal coredns-76f75df574-hwqr9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie9c79aa35f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" Namespace="kube-system" Pod="coredns-76f75df574-hwqr9" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-" Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.672 [INFO][4155] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" Namespace="kube-system" Pod="coredns-76f75df574-hwqr9" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.711 [INFO][4166] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" HandleID="k8s-pod-network.effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.722 [INFO][4166] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" HandleID="k8s-pod-network.effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed3d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", "pod":"coredns-76f75df574-hwqr9", "timestamp":"2024-12-13 01:17:57.711576946 +0000 UTC"}, Hostname:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.722 [INFO][4166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.722 [INFO][4166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.722 [INFO][4166] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal' Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.724 [INFO][4166] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.729 [INFO][4166] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.734 [INFO][4166] ipam/ipam.go 489: Trying affinity for 192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.735 [INFO][4166] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.738 [INFO][4166] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.738 [INFO][4166] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.739 [INFO][4166] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4 Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.745 [INFO][4166] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.752 [INFO][4166] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.65/26] block=192.168.80.64/26 handle="k8s-pod-network.effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.752 [INFO][4166] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.65/26] handle="k8s-pod-network.effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.752 [INFO][4166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
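
The run of messages above (look up the host's affinities, try the affinity for 192.168.80.64/26, load the block, then "Attempting to assign 1 addresses from block") is Calico's block-affine IPAM: each node preferentially allocates out of a /26 it has claimed. A schematic of the claim step, assuming that 192.168.80.64 itself is already held by the node's own vxlan.calico tunnel address (the ntpd lines later in this log show it listening on that IP), which is why the first pod claim comes back as .65:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // firstFree walks a block in address order and claims the first
    // ordinal not yet in use; a /26 holds 64 of them. This is a toy
    // version of the "Attempting to assign 1 addresses" step.
    func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                used[a] = true
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.80.64/26")
        used := map[netip.Addr]bool{
            // .64 is taken here by the node's vxlan.calico tunnel.
            netip.MustParseAddr("192.168.80.64"): true,
        }
        for i := 0; i < 3; i++ {
            a, _ := firstFree(block, used)
            fmt.Println(a) // .65, .66, .67: the three pod claims in this log
        }
    }

The real allocator runs this under the host-wide IPAM lock and persists the result ("Writing block in order to claim IPs") before reporting "Successfully claimed IPs".
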
Dec 13 01:17:57.781055 containerd[1595]: 2024-12-13 01:17:57.752 [INFO][4166] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.65/26] IPv6=[] ContainerID="effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" HandleID="k8s-pod-network.effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" Dec 13 01:17:57.785399 containerd[1595]: 2024-12-13 01:17:57.754 [INFO][4155] cni-plugin/k8s.go 386: Populated endpoint ContainerID="effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" Namespace="kube-system" Pod="coredns-76f75df574-hwqr9" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8fbdcc41-c83a-49a1-b4de-a5d542202b50", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-hwqr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9c79aa35f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:17:57.785399 containerd[1595]: 2024-12-13 01:17:57.754 [INFO][4155] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.65/32] ContainerID="effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" Namespace="kube-system" Pod="coredns-76f75df574-hwqr9" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" Dec 13 01:17:57.785399 containerd[1595]: 2024-12-13 01:17:57.754 [INFO][4155] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9c79aa35f2 ContainerID="effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" Namespace="kube-system" Pod="coredns-76f75df574-hwqr9" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" Dec 13 01:17:57.785399 containerd[1595]: 2024-12-13 01:17:57.757 [INFO][4155] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" Namespace="kube-system" Pod="coredns-76f75df574-hwqr9" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" Dec 13 01:17:57.785399 containerd[1595]: 2024-12-13 01:17:57.758 [INFO][4155] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" Namespace="kube-system" Pod="coredns-76f75df574-hwqr9" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8fbdcc41-c83a-49a1-b4de-a5d542202b50", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4", Pod:"coredns-76f75df574-hwqr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9c79aa35f2", MAC:"1a:0b:73:9d:1b:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:17:57.785399 containerd[1595]: 2024-12-13 01:17:57.776 [INFO][4155] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4" Namespace="kube-system" Pod="coredns-76f75df574-hwqr9" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0" Dec 13 01:17:57.823677 containerd[1595]: time="2024-12-13T01:17:57.822463004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:57.823677 containerd[1595]: time="2024-12-13T01:17:57.822531451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:57.823677 containerd[1595]: time="2024-12-13T01:17:57.822550812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:57.823677 containerd[1595]: time="2024-12-13T01:17:57.822679393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:57.907092 containerd[1595]: time="2024-12-13T01:17:57.907036676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwqr9,Uid:8fbdcc41-c83a-49a1-b4de-a5d542202b50,Namespace:kube-system,Attempt:1,} returns sandbox id \"effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4\"" Dec 13 01:17:57.910925 containerd[1595]: time="2024-12-13T01:17:57.910852977Z" level=info msg="CreateContainer within sandbox \"effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:17:57.924701 containerd[1595]: time="2024-12-13T01:17:57.924649018Z" level=info msg="CreateContainer within sandbox \"effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b71f42c233a4443866d788318219e742c74d818e8f24a193670cd3f20953bf8\"" Dec 13 01:17:57.926479 containerd[1595]: time="2024-12-13T01:17:57.925408155Z" level=info msg="StartContainer for \"9b71f42c233a4443866d788318219e742c74d818e8f24a193670cd3f20953bf8\"" Dec 13 01:17:57.990240 containerd[1595]: time="2024-12-13T01:17:57.990177114Z" level=info msg="StartContainer for \"9b71f42c233a4443866d788318219e742c74d818e8f24a193670cd3f20953bf8\" returns successfully" Dec 13 01:17:58.503991 containerd[1595]: time="2024-12-13T01:17:58.502929193Z" level=info msg="StopPodSandbox for \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\"" Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.566 [INFO][4278] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.566 [INFO][4278] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" iface="eth0" netns="/var/run/netns/cni-024d4562-4ee3-0cca-8dbd-d42b810a88ba" Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.567 [INFO][4278] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" iface="eth0" netns="/var/run/netns/cni-024d4562-4ee3-0cca-8dbd-d42b810a88ba" Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.568 [INFO][4278] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" iface="eth0" netns="/var/run/netns/cni-024d4562-4ee3-0cca-8dbd-d42b810a88ba" Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.568 [INFO][4278] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.568 [INFO][4278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.596 [INFO][4284] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" HandleID="k8s-pod-network.17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.596 [INFO][4284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.596 [INFO][4284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.603 [WARNING][4284] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" HandleID="k8s-pod-network.17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.603 [INFO][4284] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" HandleID="k8s-pod-network.17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.606 [INFO][4284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:17:58.616509 containerd[1595]: 2024-12-13 01:17:58.612 [INFO][4278] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Dec 13 01:17:58.620050 containerd[1595]: time="2024-12-13T01:17:58.616641332Z" level=info msg="TearDown network for sandbox \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\" successfully" Dec 13 01:17:58.620050 containerd[1595]: time="2024-12-13T01:17:58.616677430Z" level=info msg="StopPodSandbox for \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\" returns successfully" Dec 13 01:17:58.621552 containerd[1595]: time="2024-12-13T01:17:58.620867310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6fb557cc-wgc99,Uid:6dceeefa-4cd7-42f8-ab1a-b8b45660c051,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:17:58.624229 systemd[1]: run-netns-cni\x2d024d4562\x2d4ee3\x2d0cca\x2d8dbd\x2dd42b810a88ba.mount: Deactivated successfully. 
Dec 13 01:17:58.769032 systemd-networkd[1220]: cali1d4d87df705: Link UP Dec 13 01:17:58.773815 systemd-networkd[1220]: cali1d4d87df705: Gained carrier Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.689 [INFO][4291] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0 calico-apiserver-7b6fb557cc- calico-apiserver 6dceeefa-4cd7-42f8-ab1a-b8b45660c051 786 0 2024-12-13 01:17:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b6fb557cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal calico-apiserver-7b6fb557cc-wgc99 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1d4d87df705 [] []}} ContainerID="beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-wgc99" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-" Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.689 [INFO][4291] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-wgc99" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.723 [INFO][4301] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" HandleID="k8s-pod-network.beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.734 [INFO][4301] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" HandleID="k8s-pod-network.beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", "pod":"calico-apiserver-7b6fb557cc-wgc99", "timestamp":"2024-12-13 01:17:58.723267797 +0000 UTC"}, Hostname:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.734 [INFO][4301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.734 [INFO][4301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.734 [INFO][4301] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal' Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.736 [INFO][4301] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.740 [INFO][4301] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.744 [INFO][4301] ipam/ipam.go 489: Trying affinity for 192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.746 [INFO][4301] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.748 [INFO][4301] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.748 [INFO][4301] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.750 [INFO][4301] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.755 [INFO][4301] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.761 [INFO][4301] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.66/26] block=192.168.80.64/26 handle="k8s-pod-network.beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.761 [INFO][4301] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.66/26] handle="k8s-pod-network.beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.762 [INFO][4301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
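
Two pods have now drawn consecutive /32s (192.168.80.65/32 and .66/32) out of the same affine block; a /26 leaves the node 2^(32-26) = 64 ordinals to hand out before it would need to claim another block. A quick standard-library check of what 192.168.80.64/26 spans:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.80.64/26")
        size := 1 << (32 - block.Bits()) // 2^6 = 64 addresses
        first := block.Addr()
        last := first
        for i := 0; i < size-1; i++ {
            last = last.Next()
        }
        fmt.Printf("%s holds %d addresses: %s through %s\n", block, size, first, last)
        // Prints: 192.168.80.64/26 holds 64 addresses: 192.168.80.64 through 192.168.80.127
    }
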
Dec 13 01:17:58.798631 containerd[1595]: 2024-12-13 01:17:58.762 [INFO][4301] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.66/26] IPv6=[] ContainerID="beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" HandleID="k8s-pod-network.beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" Dec 13 01:17:58.801845 containerd[1595]: 2024-12-13 01:17:58.764 [INFO][4291] cni-plugin/k8s.go 386: Populated endpoint ContainerID="beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-wgc99" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0", GenerateName:"calico-apiserver-7b6fb557cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dceeefa-4cd7-42f8-ab1a-b8b45660c051", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6fb557cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7b6fb557cc-wgc99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d4d87df705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:17:58.801845 containerd[1595]: 2024-12-13 01:17:58.764 [INFO][4291] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.66/32] ContainerID="beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-wgc99" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" Dec 13 01:17:58.801845 containerd[1595]: 2024-12-13 01:17:58.764 [INFO][4291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d4d87df705 ContainerID="beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-wgc99" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" Dec 13 01:17:58.801845 containerd[1595]: 2024-12-13 01:17:58.768 [INFO][4291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" Namespace="calico-apiserver" 
Pod="calico-apiserver-7b6fb557cc-wgc99" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" Dec 13 01:17:58.801845 containerd[1595]: 2024-12-13 01:17:58.772 [INFO][4291] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-wgc99" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0", GenerateName:"calico-apiserver-7b6fb557cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dceeefa-4cd7-42f8-ab1a-b8b45660c051", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6fb557cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed", Pod:"calico-apiserver-7b6fb557cc-wgc99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d4d87df705", MAC:"ca:c1:81:77:53:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:17:58.801845 containerd[1595]: 2024-12-13 01:17:58.790 [INFO][4291] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-wgc99" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" Dec 13 01:17:58.836351 kubelet[2799]: I1213 01:17:58.832863 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hwqr9" podStartSLOduration=32.832806863 podStartE2EDuration="32.832806863s" podCreationTimestamp="2024-12-13 01:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:58.832537886 +0000 UTC m=+46.529954996" watchObservedRunningTime="2024-12-13 01:17:58.832806863 +0000 UTC m=+46.530223972" Dec 13 01:17:58.866724 containerd[1595]: time="2024-12-13T01:17:58.864157155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:58.866724 containerd[1595]: time="2024-12-13T01:17:58.865732495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:58.866724 containerd[1595]: time="2024-12-13T01:17:58.865754118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:58.866724 containerd[1595]: time="2024-12-13T01:17:58.865879997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:58.972245 containerd[1595]: time="2024-12-13T01:17:58.972201598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6fb557cc-wgc99,Uid:6dceeefa-4cd7-42f8-ab1a-b8b45660c051,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed\"" Dec 13 01:17:58.977726 containerd[1595]: time="2024-12-13T01:17:58.977588036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:17:59.267874 systemd-networkd[1220]: calie9c79aa35f2: Gained IPv6LL Dec 13 01:17:59.843273 systemd-networkd[1220]: cali1d4d87df705: Gained IPv6LL Dec 13 01:18:01.065610 containerd[1595]: time="2024-12-13T01:18:01.065538801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:18:01.067120 containerd[1595]: time="2024-12-13T01:18:01.067030022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:18:01.068642 containerd[1595]: time="2024-12-13T01:18:01.068570798Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:18:01.073270 containerd[1595]: time="2024-12-13T01:18:01.072735209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:18:01.074083 containerd[1595]: time="2024-12-13T01:18:01.074042385Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.096381327s" Dec 13 01:18:01.074083 containerd[1595]: time="2024-12-13T01:18:01.074091593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:18:01.077110 containerd[1595]: time="2024-12-13T01:18:01.076925162Z" level=info msg="CreateContainer within sandbox \"beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:18:01.094537 containerd[1595]: time="2024-12-13T01:18:01.094490969Z" level=info msg="CreateContainer within sandbox \"beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"785f6c75316dd7a44a4d0347c23fb9c4ea15a8c03f1589c4d39604e352441631\"" Dec 13 01:18:01.097066 containerd[1595]: time="2024-12-13T01:18:01.095083740Z" level=info msg="StartContainer for \"785f6c75316dd7a44a4d0347c23fb9c4ea15a8c03f1589c4d39604e352441631\"" Dec 13 01:18:01.196279 containerd[1595]: time="2024-12-13T01:18:01.196203891Z" level=info msg="StartContainer for \"785f6c75316dd7a44a4d0347c23fb9c4ea15a8c03f1589c4d39604e352441631\" returns successfully" Dec 13 01:18:01.282700 kubelet[2799]: I1213 01:18:01.282620 2799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:18:01.504423 containerd[1595]: time="2024-12-13T01:18:01.504242055Z" level=info msg="StopPodSandbox for \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\"" Dec 13 01:18:01.505249 containerd[1595]: time="2024-12-13T01:18:01.505212429Z" level=info msg="StopPodSandbox for \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\"" Dec 13 01:18:01.508367 containerd[1595]: time="2024-12-13T01:18:01.507815565Z" level=info msg="StopPodSandbox for \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\"" Dec 13 01:18:01.508524 containerd[1595]: time="2024-12-13T01:18:01.508410824Z" level=info msg="StopPodSandbox for \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\"" Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.731 [INFO][4511] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.733 [INFO][4511] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" iface="eth0" netns="/var/run/netns/cni-9557f701-443b-4966-29fe-5a757a998423" Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.735 [INFO][4511] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" iface="eth0" netns="/var/run/netns/cni-9557f701-443b-4966-29fe-5a757a998423" Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.736 [INFO][4511] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" iface="eth0" netns="/var/run/netns/cni-9557f701-443b-4966-29fe-5a757a998423" Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.737 [INFO][4511] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.737 [INFO][4511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.959 [INFO][4536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" HandleID="k8s-pod-network.28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.961 [INFO][4536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.962 [INFO][4536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.974 [WARNING][4536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" HandleID="k8s-pod-network.28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.974 [INFO][4536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" HandleID="k8s-pod-network.28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.977 [INFO][4536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:18:01.987058 containerd[1595]: 2024-12-13 01:18:01.984 [INFO][4511] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Dec 13 01:18:01.990002 containerd[1595]: time="2024-12-13T01:18:01.987764793Z" level=info msg="TearDown network for sandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\" successfully" Dec 13 01:18:01.990002 containerd[1595]: time="2024-12-13T01:18:01.987860102Z" level=info msg="StopPodSandbox for \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\" returns successfully" Dec 13 01:18:01.990591 containerd[1595]: time="2024-12-13T01:18:01.990557390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6fb557cc-4xz7k,Uid:b2425adc-9348-4776-a9e2-adeda1a66be5,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:01.762 [INFO][4519] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:01.764 [INFO][4519] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" iface="eth0" netns="/var/run/netns/cni-05afb6fb-e8b2-aa5c-8560-9ccbe6ba2d5d" Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:01.766 [INFO][4519] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" iface="eth0" netns="/var/run/netns/cni-05afb6fb-e8b2-aa5c-8560-9ccbe6ba2d5d" Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:01.768 [INFO][4519] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" iface="eth0" netns="/var/run/netns/cni-05afb6fb-e8b2-aa5c-8560-9ccbe6ba2d5d" Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:01.768 [INFO][4519] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:01.768 [INFO][4519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:01.981 [INFO][4541] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" HandleID="k8s-pod-network.75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:01.984 [INFO][4541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:01.984 [INFO][4541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:02.005 [WARNING][4541] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" HandleID="k8s-pod-network.75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:02.006 [INFO][4541] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" HandleID="k8s-pod-network.75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:02.010 [INFO][4541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:18:02.025779 containerd[1595]: 2024-12-13 01:18:02.016 [INFO][4519] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Dec 13 01:18:02.025779 containerd[1595]: time="2024-12-13T01:18:02.025612788Z" level=info msg="TearDown network for sandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\" successfully" Dec 13 01:18:02.025779 containerd[1595]: time="2024-12-13T01:18:02.025660356Z" level=info msg="StopPodSandbox for \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\" returns successfully" Dec 13 01:18:02.029907 containerd[1595]: time="2024-12-13T01:18:02.028107855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-876cd,Uid:66c052f9-688a-4b76-896e-248573f70c35,Namespace:calico-system,Attempt:1,}" Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:01.772 [INFO][4512] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:01.774 [INFO][4512] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" iface="eth0" netns="/var/run/netns/cni-faa2f55a-9b51-46de-643b-73c38cd53a6d" Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:01.775 [INFO][4512] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" iface="eth0" netns="/var/run/netns/cni-faa2f55a-9b51-46de-643b-73c38cd53a6d" Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:01.779 [INFO][4512] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" iface="eth0" netns="/var/run/netns/cni-faa2f55a-9b51-46de-643b-73c38cd53a6d" Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:01.779 [INFO][4512] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:01.779 [INFO][4512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:02.001 [INFO][4542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" HandleID="k8s-pod-network.eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:02.003 [INFO][4542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:02.011 [INFO][4542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:02.022 [WARNING][4542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" HandleID="k8s-pod-network.eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:02.022 [INFO][4542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" HandleID="k8s-pod-network.eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:02.027 [INFO][4542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:18:02.055826 containerd[1595]: 2024-12-13 01:18:02.047 [INFO][4512] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:18:02.057877 containerd[1595]: time="2024-12-13T01:18:02.056486851Z" level=info msg="TearDown network for sandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\" successfully" Dec 13 01:18:02.057877 containerd[1595]: time="2024-12-13T01:18:02.056546378Z" level=info msg="StopPodSandbox for \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\" returns successfully" Dec 13 01:18:02.061254 containerd[1595]: time="2024-12-13T01:18:02.060850091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59fdf5598b-5hh5d,Uid:c3498958-dd32-4646-b429-46dcbb7deac3,Namespace:calico-system,Attempt:1,}" Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:01.817 [INFO][4520] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:01.818 [INFO][4520] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" iface="eth0" netns="/var/run/netns/cni-325caba1-44b0-8240-33f6-8156a3cbb7de" Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:01.819 [INFO][4520] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" iface="eth0" netns="/var/run/netns/cni-325caba1-44b0-8240-33f6-8156a3cbb7de" Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:01.820 [INFO][4520] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" iface="eth0" netns="/var/run/netns/cni-325caba1-44b0-8240-33f6-8156a3cbb7de" Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:01.821 [INFO][4520] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:01.821 [INFO][4520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:02.036 [INFO][4549] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" HandleID="k8s-pod-network.c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:02.036 [INFO][4549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:02.037 [INFO][4549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:02.052 [WARNING][4549] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" HandleID="k8s-pod-network.c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:02.052 [INFO][4549] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" HandleID="k8s-pod-network.c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:02.057 [INFO][4549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:18:02.069699 containerd[1595]: 2024-12-13 01:18:02.065 [INFO][4520] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Dec 13 01:18:02.071453 containerd[1595]: time="2024-12-13T01:18:02.070132813Z" level=info msg="TearDown network for sandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\" successfully" Dec 13 01:18:02.071453 containerd[1595]: time="2024-12-13T01:18:02.070166162Z" level=info msg="StopPodSandbox for \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\" returns successfully" Dec 13 01:18:02.075713 containerd[1595]: time="2024-12-13T01:18:02.074941241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kmgns,Uid:0b515d89-e330-44b5-a936-07964075e0de,Namespace:kube-system,Attempt:1,}" Dec 13 01:18:02.103556 systemd[1]: run-netns-cni\x2d05afb6fb\x2de8b2\x2daa5c\x2d8560\x2d9ccbe6ba2d5d.mount: Deactivated successfully. Dec 13 01:18:02.104153 systemd[1]: run-netns-cni\x2dfaa2f55a\x2d9b51\x2d46de\x2d643b\x2d73c38cd53a6d.mount: Deactivated successfully. Dec 13 01:18:02.104321 systemd[1]: run-netns-cni\x2d325caba1\x2d44b0\x2d8240\x2d33f6\x2d8156a3cbb7de.mount: Deactivated successfully. Dec 13 01:18:02.104480 systemd[1]: run-netns-cni\x2d9557f701\x2d443b\x2d4966\x2d29fe\x2d5a757a998423.mount: Deactivated successfully. 
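
The interleaving above is worth noticing: four sandboxes are torn down concurrently, yet the bracketed plugin instances ([4536], [4541], [4542], [4549], apparently the CNI plugin process IDs) log "About to acquire", "Acquired", and "Released host-wide IPAM lock" strictly in turn, so the parallel DELs serialize on that lock. A toy model of the same interleaving; in reality each invocation is a separate plugin process, so the actual lock is cross-process rather than an in-process mutex:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var ipamLock sync.Mutex // stands in for the host-wide IPAM lock
        var wg sync.WaitGroup
        for _, id := range []int{4536, 4541, 4542, 4549} { // IDs taken from the log above
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                fmt.Printf("[%d] About to acquire host-wide IPAM lock.\n", id)
                ipamLock.Lock()
                fmt.Printf("[%d] Acquired host-wide IPAM lock.\n", id)
                // ...release this sandbox's addresses here...
                ipamLock.Unlock()
                fmt.Printf("[%d] Released host-wide IPAM lock.\n", id)
            }(id)
        }
        wg.Wait()
    }
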
Dec 13 01:18:02.428263 ntpd[1539]: Listen normally on 6 vxlan.calico 192.168.80.64:123 Dec 13 01:18:02.430565 ntpd[1539]: 13 Dec 01:18:02 ntpd[1539]: Listen normally on 6 vxlan.calico 192.168.80.64:123 Dec 13 01:18:02.430565 ntpd[1539]: 13 Dec 01:18:02 ntpd[1539]: Listen normally on 7 vxlan.calico [fe80::64b5:91ff:fe02:c606%4]:123 Dec 13 01:18:02.430565 ntpd[1539]: 13 Dec 01:18:02 ntpd[1539]: Listen normally on 8 calie9c79aa35f2 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:18:02.430565 ntpd[1539]: 13 Dec 01:18:02 ntpd[1539]: Listen normally on 9 cali1d4d87df705 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:18:02.429745 ntpd[1539]: Listen normally on 7 vxlan.calico [fe80::64b5:91ff:fe02:c606%4]:123 Dec 13 01:18:02.429845 ntpd[1539]: Listen normally on 8 calie9c79aa35f2 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:18:02.429919 ntpd[1539]: Listen normally on 9 cali1d4d87df705 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:18:02.581802 systemd-networkd[1220]: calidae3dbb6794: Link UP Dec 13 01:18:02.585872 systemd-networkd[1220]: calidae3dbb6794: Gained carrier Dec 13 01:18:02.622120 kubelet[2799]: I1213 01:18:02.622075 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b6fb557cc-wgc99" podStartSLOduration=25.524609157 podStartE2EDuration="27.622008703s" podCreationTimestamp="2024-12-13 01:17:35 +0000 UTC" firstStartedPulling="2024-12-13 01:17:58.977038262 +0000 UTC m=+46.674455358" lastFinishedPulling="2024-12-13 01:18:01.074437814 +0000 UTC m=+48.771854904" observedRunningTime="2024-12-13 01:18:01.865848018 +0000 UTC m=+49.563265130" watchObservedRunningTime="2024-12-13 01:18:02.622008703 +0000 UTC m=+50.319425898" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.235 [INFO][4563] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0 calico-apiserver-7b6fb557cc- calico-apiserver b2425adc-9348-4776-a9e2-adeda1a66be5 814 0 2024-12-13 01:17:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b6fb557cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal calico-apiserver-7b6fb557cc-4xz7k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidae3dbb6794 [] []}} ContainerID="91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-4xz7k" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.241 [INFO][4563] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-4xz7k" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.438 [INFO][4611] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" 
HandleID="k8s-pod-network.91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.456 [INFO][4611] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" HandleID="k8s-pod-network.91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f5790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", "pod":"calico-apiserver-7b6fb557cc-4xz7k", "timestamp":"2024-12-13 01:18:02.438145323 +0000 UTC"}, Hostname:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.458 [INFO][4611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.458 [INFO][4611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.459 [INFO][4611] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal' Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.463 [INFO][4611] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.481 [INFO][4611] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.495 [INFO][4611] ipam/ipam.go 489: Trying affinity for 192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.502 [INFO][4611] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.514 [INFO][4611] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.514 [INFO][4611] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.525 [INFO][4611] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67 Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.536 [INFO][4611] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.64/26 
handle="k8s-pod-network.91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.556 [INFO][4611] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.67/26] block=192.168.80.64/26 handle="k8s-pod-network.91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.557 [INFO][4611] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.67/26] handle="k8s-pod-network.91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.557 [INFO][4611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:18:02.626420 containerd[1595]: 2024-12-13 01:18:02.558 [INFO][4611] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.67/26] IPv6=[] ContainerID="91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" HandleID="k8s-pod-network.91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" Dec 13 01:18:02.628678 containerd[1595]: 2024-12-13 01:18:02.567 [INFO][4563] cni-plugin/k8s.go 386: Populated endpoint ContainerID="91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-4xz7k" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0", GenerateName:"calico-apiserver-7b6fb557cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b2425adc-9348-4776-a9e2-adeda1a66be5", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6fb557cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7b6fb557cc-4xz7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidae3dbb6794", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:18:02.628678 containerd[1595]: 2024-12-13 01:18:02.569 [INFO][4563] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.67/32] ContainerID="91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" 
Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-4xz7k" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" Dec 13 01:18:02.628678 containerd[1595]: 2024-12-13 01:18:02.570 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidae3dbb6794 ContainerID="91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-4xz7k" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" Dec 13 01:18:02.628678 containerd[1595]: 2024-12-13 01:18:02.586 [INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-4xz7k" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" Dec 13 01:18:02.628678 containerd[1595]: 2024-12-13 01:18:02.588 [INFO][4563] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-4xz7k" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0", GenerateName:"calico-apiserver-7b6fb557cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b2425adc-9348-4776-a9e2-adeda1a66be5", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6fb557cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67", Pod:"calico-apiserver-7b6fb557cc-4xz7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidae3dbb6794", MAC:"46:60:2d:50:5a:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:18:02.628678 containerd[1595]: 2024-12-13 01:18:02.620 [INFO][4563] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67" Namespace="calico-apiserver" Pod="calico-apiserver-7b6fb557cc-4xz7k" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0" Dec 
13 01:18:02.708132 systemd-networkd[1220]: caliae840dfac7c: Link UP Dec 13 01:18:02.708550 systemd-networkd[1220]: caliae840dfac7c: Gained carrier Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.303 [INFO][4574] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0 csi-node-driver- calico-system 66c052f9-688a-4b76-896e-248573f70c35 815 0 2024-12-13 01:17:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal csi-node-driver-876cd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliae840dfac7c [] []}} ContainerID="5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" Namespace="calico-system" Pod="csi-node-driver-876cd" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-" Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.304 [INFO][4574] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" Namespace="calico-system" Pod="csi-node-driver-876cd" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.520 [INFO][4616] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" HandleID="k8s-pod-network.5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.563 [INFO][4616] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" HandleID="k8s-pod-network.5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fea60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", "pod":"csi-node-driver-876cd", "timestamp":"2024-12-13 01:18:02.518030256 +0000 UTC"}, Hostname:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.563 [INFO][4616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.563 [INFO][4616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
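The [4611] request above already walked Calico's IPAM path for this host: look up block affinities, try 192.168.80.64/26, load the block, claim an address from it; the [4616] request that has just taken the lock repeats the same steps. A small sketch of the block arithmetic those "Trying affinity" lines rely on, using Go's net/netip (the block owning a pod address is simply the address masked to the block's prefix length):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// A /26 block holds 2^(32-26) = 64 addresses; the block containing a
	// pod address is that address masked to the block's prefix length.
	addr := netip.MustParseAddr("192.168.80.68")
	block := netip.PrefixFrom(addr, 26).Masked()
	fmt.Println(block)                    // 192.168.80.64/26
	fmt.Println(1 << (32 - block.Bits())) // 64
}
```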
Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.563 [INFO][4616] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal' Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.573 [INFO][4616] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.596 [INFO][4616] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.618 [INFO][4616] ipam/ipam.go 489: Trying affinity for 192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.639 [INFO][4616] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.650 [INFO][4616] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.651 [INFO][4616] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.655 [INFO][4616] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.667 [INFO][4616] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.677 [INFO][4616] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.68/26] block=192.168.80.64/26 handle="k8s-pod-network.5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.677 [INFO][4616] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.68/26] handle="k8s-pod-network.5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.678 [INFO][4616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
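"Attempting to assign 1 addresses from block" is a scan for the first free offset in the /26: the apiserver pod received 192.168.80.67 above, and this request just claimed 192.168.80.68. A sketch of that search shape under stated simplifications (real Calico persists a per-block allocation array plus handle metadata in the datastore, none of which is modeled here):

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in the block not marked used. The
// used set is hypothetical, chosen so the scan lands on .68, the address
// this log shows being claimed.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.80.64/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.80.64"): true,
		netip.MustParseAddr("192.168.80.65"): true,
		netip.MustParseAddr("192.168.80.66"): true,
		netip.MustParseAddr("192.168.80.67"): true,
	}
	a, ok := nextFree(block, used)
	fmt.Println(a, ok) // 192.168.80.68 true
}
```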
Dec 13 01:18:02.751702 containerd[1595]: 2024-12-13 01:18:02.678 [INFO][4616] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.68/26] IPv6=[] ContainerID="5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" HandleID="k8s-pod-network.5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" Dec 13 01:18:02.756280 containerd[1595]: 2024-12-13 01:18:02.687 [INFO][4574] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" Namespace="calico-system" Pod="csi-node-driver-876cd" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"66c052f9-688a-4b76-896e-248573f70c35", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-876cd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae840dfac7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:18:02.756280 containerd[1595]: 2024-12-13 01:18:02.687 [INFO][4574] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.68/32] ContainerID="5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" Namespace="calico-system" Pod="csi-node-driver-876cd" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" Dec 13 01:18:02.756280 containerd[1595]: 2024-12-13 01:18:02.687 [INFO][4574] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae840dfac7c ContainerID="5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" Namespace="calico-system" Pod="csi-node-driver-876cd" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" Dec 13 01:18:02.756280 containerd[1595]: 2024-12-13 01:18:02.711 [INFO][4574] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" Namespace="calico-system" Pod="csi-node-driver-876cd" 
WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" Dec 13 01:18:02.756280 containerd[1595]: 2024-12-13 01:18:02.714 [INFO][4574] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" Namespace="calico-system" Pod="csi-node-driver-876cd" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"66c052f9-688a-4b76-896e-248573f70c35", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac", Pod:"csi-node-driver-876cd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae840dfac7c", MAC:"2e:1b:b7:f1:67:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:18:02.756280 containerd[1595]: 2024-12-13 01:18:02.742 [INFO][4574] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac" Namespace="calico-system" Pod="csi-node-driver-876cd" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0" Dec 13 01:18:02.776497 containerd[1595]: time="2024-12-13T01:18:02.775910253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:18:02.776497 containerd[1595]: time="2024-12-13T01:18:02.776005640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:18:02.776497 containerd[1595]: time="2024-12-13T01:18:02.776033707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:02.779540 containerd[1595]: time="2024-12-13T01:18:02.778518941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:02.800830 systemd-networkd[1220]: cali2913dba4915: Link UP Dec 13 01:18:02.803641 systemd-networkd[1220]: cali2913dba4915: Gained carrier Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.407 [INFO][4587] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0 coredns-76f75df574- kube-system 0b515d89-e330-44b5-a936-07964075e0de 817 0 2024-12-13 01:17:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal coredns-76f75df574-kmgns eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2913dba4915 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" Namespace="kube-system" Pod="coredns-76f75df574-kmgns" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-" Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.408 [INFO][4587] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" Namespace="kube-system" Pod="coredns-76f75df574-kmgns" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.572 [INFO][4627] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" HandleID="k8s-pod-network.57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.609 [INFO][4627] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" HandleID="k8s-pod-network.57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039f9e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", "pod":"coredns-76f75df574-kmgns", "timestamp":"2024-12-13 01:18:02.572325081 +0000 UTC"}, Hostname:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.611 [INFO][4627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.678 [INFO][4627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
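Each of the concurrent CNI ADDs in this log ([4611], [4616], [4627], and [4628] below) brackets its assignment with "About to acquire" / "Acquired" / "Released host-wide IPAM lock", so assignments are strictly serialized per host, and the sequential .67, .68, .69 results fall out of acquisition order. A process-local sketch of that discipline with a mutex (toy types of my own; the real lock must also coordinate across separate plugin invocations, which this does not attempt):

```go
package main

import (
	"fmt"
	"sync"
)

// ipam is a toy stand-in for the per-host allocator: one mutex playing the
// role of the host-wide IPAM lock, one counter playing the role of the
// next free offset in 192.168.80.64/26.
type ipam struct {
	mu   sync.Mutex
	next int // offsets 0..2 (.64-.66) assumed already taken
}

func (p *ipam) assign(pod string) {
	p.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer p.mu.Unlock() // "Released host-wide IPAM lock."
	fmt.Printf("assigned 192.168.80.%d to %s\n", 64+p.next, pod)
	p.next++
}

func main() {
	p := &ipam{next: 3}
	var wg sync.WaitGroup
	// Which pod gets .67, .68, or .69 depends on lock acquisition order,
	// exactly as in the log.
	for _, pod := range []string{
		"calico-apiserver-7b6fb557cc-4xz7k",
		"csi-node-driver-876cd",
		"coredns-76f75df574-kmgns",
	} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			p.assign(pod)
		}(pod)
	}
	wg.Wait()
}
```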
Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.682 [INFO][4627] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal' Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.695 [INFO][4627] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.714 [INFO][4627] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.740 [INFO][4627] ipam/ipam.go 489: Trying affinity for 192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.743 [INFO][4627] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.748 [INFO][4627] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.748 [INFO][4627] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.751 [INFO][4627] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.762 [INFO][4627] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.778 [INFO][4627] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.69/26] block=192.168.80.64/26 handle="k8s-pod-network.57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.779 [INFO][4627] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.69/26] handle="k8s-pod-network.57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.779 [INFO][4627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
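The kubelet pod_startup_latency_tracker entries in this log are internally consistent with podStartE2EDuration = watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration = E2E minus the image-pull window (firstStartedPulling to lastFinishedPulling), i.e. pull time is excluded from the SLO figure. Recomputing the calico-apiserver-7b6fb557cc-wgc99 numbers reported earlier:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// time.Parse accepts a fractional-seconds field in the input even when
	// the layout omits it.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	ts := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := ts("2024-12-13 01:17:35 +0000 UTC")             // podCreationTimestamp
	observed := ts("2024-12-13 01:18:02.622008703 +0000 UTC")  // watchObservedRunningTime
	pullStart := ts("2024-12-13 01:17:58.977038262 +0000 UTC") // firstStartedPulling
	pullEnd := ts("2024-12-13 01:18:01.074437814 +0000 UTC")   // lastFinishedPulling

	e2e := observed.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e) // 27.622008703s, matching podStartE2EDuration
	fmt.Println(slo) // 25.524609151s; the logged 25.524609157 falls out of
	// the monotonic m= offsets rather than the wall-clock stamps
}
```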
Dec 13 01:18:02.854934 containerd[1595]: 2024-12-13 01:18:02.779 [INFO][4627] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.69/26] IPv6=[] ContainerID="57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" HandleID="k8s-pod-network.57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" Dec 13 01:18:02.860203 containerd[1595]: 2024-12-13 01:18:02.783 [INFO][4587] cni-plugin/k8s.go 386: Populated endpoint ContainerID="57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" Namespace="kube-system" Pod="coredns-76f75df574-kmgns" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0b515d89-e330-44b5-a936-07964075e0de", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-kmgns", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2913dba4915", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:18:02.860203 containerd[1595]: 2024-12-13 01:18:02.788 [INFO][4587] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.69/32] ContainerID="57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" Namespace="kube-system" Pod="coredns-76f75df574-kmgns" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" Dec 13 01:18:02.860203 containerd[1595]: 2024-12-13 01:18:02.790 [INFO][4587] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2913dba4915 ContainerID="57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" Namespace="kube-system" Pod="coredns-76f75df574-kmgns" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" Dec 13 01:18:02.860203 containerd[1595]: 2024-12-13 01:18:02.803 [INFO][4587] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" Namespace="kube-system" Pod="coredns-76f75df574-kmgns" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" Dec 13 01:18:02.860203 containerd[1595]: 2024-12-13 01:18:02.805 [INFO][4587] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" Namespace="kube-system" Pod="coredns-76f75df574-kmgns" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0b515d89-e330-44b5-a936-07964075e0de", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd", Pod:"coredns-76f75df574-kmgns", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2913dba4915", MAC:"82:ca:49:53:c8:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:18:02.860203 containerd[1595]: 2024-12-13 01:18:02.845 [INFO][4587] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd" Namespace="kube-system" Pod="coredns-76f75df574-kmgns" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0" Dec 13 01:18:02.937177 containerd[1595]: time="2024-12-13T01:18:02.929825666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:18:02.937177 containerd[1595]: time="2024-12-13T01:18:02.929949658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:18:02.937177 containerd[1595]: time="2024-12-13T01:18:02.929982809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:02.938508 containerd[1595]: time="2024-12-13T01:18:02.937180966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:03.047473 containerd[1595]: time="2024-12-13T01:18:03.043938357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:18:03.047473 containerd[1595]: time="2024-12-13T01:18:03.044054150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:18:03.047473 containerd[1595]: time="2024-12-13T01:18:03.044076669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:03.047473 containerd[1595]: time="2024-12-13T01:18:03.044399444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:03.084595 systemd-networkd[1220]: calie78417b08d5: Link UP Dec 13 01:18:03.096227 systemd-networkd[1220]: calie78417b08d5: Gained carrier Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.429 [INFO][4590] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0 calico-kube-controllers-59fdf5598b- calico-system c3498958-dd32-4646-b429-46dcbb7deac3 816 0 2024-12-13 01:17:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59fdf5598b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal calico-kube-controllers-59fdf5598b-5hh5d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie78417b08d5 [] []}} ContainerID="b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" Namespace="calico-system" Pod="calico-kube-controllers-59fdf5598b-5hh5d" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-" Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.430 [INFO][4590] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" Namespace="calico-system" Pod="calico-kube-controllers-59fdf5598b-5hh5d" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.582 [INFO][4628] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" HandleID="k8s-pod-network.b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:03.158609 
containerd[1595]: 2024-12-13 01:18:02.633 [INFO][4628] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" HandleID="k8s-pod-network.b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360290), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", "pod":"calico-kube-controllers-59fdf5598b-5hh5d", "timestamp":"2024-12-13 01:18:02.582565976 +0000 UTC"}, Hostname:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.637 [INFO][4628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.780 [INFO][4628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.781 [INFO][4628] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal' Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.815 [INFO][4628] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.858 [INFO][4628] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.932 [INFO][4628] ipam/ipam.go 489: Trying affinity for 192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.946 [INFO][4628] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.959 [INFO][4628] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.961 [INFO][4628] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.965 [INFO][4628] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2 Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:02.978 [INFO][4628] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:03.001 [INFO][4628] ipam/ipam.go 1216: Successfully claimed IPs: 
[192.168.80.70/26] block=192.168.80.64/26 handle="k8s-pod-network.b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:03.004 [INFO][4628] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.70/26] handle="k8s-pod-network.b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" host="ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal" Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:03.005 [INFO][4628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:18:03.158609 containerd[1595]: 2024-12-13 01:18:03.007 [INFO][4628] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.70/26] IPv6=[] ContainerID="b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" HandleID="k8s-pod-network.b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:03.162560 containerd[1595]: 2024-12-13 01:18:03.051 [INFO][4590] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" Namespace="calico-system" Pod="calico-kube-controllers-59fdf5598b-5hh5d" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0", GenerateName:"calico-kube-controllers-59fdf5598b-", Namespace:"calico-system", SelfLink:"", UID:"c3498958-dd32-4646-b429-46dcbb7deac3", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59fdf5598b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-59fdf5598b-5hh5d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie78417b08d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:18:03.162560 containerd[1595]: 2024-12-13 01:18:03.051 [INFO][4590] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.70/32] ContainerID="b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" Namespace="calico-system" Pod="calico-kube-controllers-59fdf5598b-5hh5d" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:03.162560 
containerd[1595]: 2024-12-13 01:18:03.051 [INFO][4590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie78417b08d5 ContainerID="b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" Namespace="calico-system" Pod="calico-kube-controllers-59fdf5598b-5hh5d" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:03.162560 containerd[1595]: 2024-12-13 01:18:03.117 [INFO][4590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" Namespace="calico-system" Pod="calico-kube-controllers-59fdf5598b-5hh5d" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:03.162560 containerd[1595]: 2024-12-13 01:18:03.121 [INFO][4590] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" Namespace="calico-system" Pod="calico-kube-controllers-59fdf5598b-5hh5d" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0", GenerateName:"calico-kube-controllers-59fdf5598b-", Namespace:"calico-system", SelfLink:"", UID:"c3498958-dd32-4646-b429-46dcbb7deac3", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59fdf5598b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2", Pod:"calico-kube-controllers-59fdf5598b-5hh5d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie78417b08d5", MAC:"86:3b:fb:de:17:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:18:03.162560 containerd[1595]: 2024-12-13 01:18:03.147 [INFO][4590] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2" Namespace="calico-system" Pod="calico-kube-controllers-59fdf5598b-5hh5d" WorkloadEndpoint="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:03.237962 containerd[1595]: time="2024-12-13T01:18:03.237643848Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-876cd,Uid:66c052f9-688a-4b76-896e-248573f70c35,Namespace:calico-system,Attempt:1,} returns sandbox id \"5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac\"" Dec 13 01:18:03.251231 containerd[1595]: time="2024-12-13T01:18:03.251011676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:18:03.299066 containerd[1595]: time="2024-12-13T01:18:03.297975121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:18:03.299945 containerd[1595]: time="2024-12-13T01:18:03.299683503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:18:03.299945 containerd[1595]: time="2024-12-13T01:18:03.299770848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:03.302636 containerd[1595]: time="2024-12-13T01:18:03.301331618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:03.308699 containerd[1595]: time="2024-12-13T01:18:03.308443196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kmgns,Uid:0b515d89-e330-44b5-a936-07964075e0de,Namespace:kube-system,Attempt:1,} returns sandbox id \"57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd\"" Dec 13 01:18:03.339212 containerd[1595]: time="2024-12-13T01:18:03.339036910Z" level=info msg="CreateContainer within sandbox \"57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:18:03.394266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3732158138.mount: Deactivated successfully. 
Dec 13 01:18:03.409340 containerd[1595]: time="2024-12-13T01:18:03.408335543Z" level=info msg="CreateContainer within sandbox \"57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"883b9b6bb2225ed94cbd582a737d755545a21175a7352c67d79ae3b3e23e3402\""
Dec 13 01:18:03.415381 containerd[1595]: time="2024-12-13T01:18:03.414221733Z" level=info msg="StartContainer for \"883b9b6bb2225ed94cbd582a737d755545a21175a7352c67d79ae3b3e23e3402\""
Dec 13 01:18:03.454430 containerd[1595]: time="2024-12-13T01:18:03.454384634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6fb557cc-4xz7k,Uid:b2425adc-9348-4776-a9e2-adeda1a66be5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67\""
Dec 13 01:18:03.466245 containerd[1595]: time="2024-12-13T01:18:03.466201693Z" level=info msg="CreateContainer within sandbox \"91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 01:18:03.492516 containerd[1595]: time="2024-12-13T01:18:03.492470002Z" level=info msg="CreateContainer within sandbox \"91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2ca02d4966a63941c829c49db977bc3fdb87bbb29838340a8b85e8ce13d19d0d\""
Dec 13 01:18:03.494149 containerd[1595]: time="2024-12-13T01:18:03.494113502Z" level=info msg="StartContainer for \"2ca02d4966a63941c829c49db977bc3fdb87bbb29838340a8b85e8ce13d19d0d\""
Dec 13 01:18:03.602187 containerd[1595]: time="2024-12-13T01:18:03.601417819Z" level=info msg="StartContainer for \"883b9b6bb2225ed94cbd582a737d755545a21175a7352c67d79ae3b3e23e3402\" returns successfully"
Dec 13 01:18:03.707438 containerd[1595]: time="2024-12-13T01:18:03.707332254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59fdf5598b-5hh5d,Uid:c3498958-dd32-4646-b429-46dcbb7deac3,Namespace:calico-system,Attempt:1,} returns sandbox id \"b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2\""
Dec 13 01:18:03.719865 containerd[1595]: time="2024-12-13T01:18:03.719614969Z" level=info msg="StartContainer for \"2ca02d4966a63941c829c49db977bc3fdb87bbb29838340a8b85e8ce13d19d0d\" returns successfully"
Dec 13 01:18:03.937003 kubelet[2799]: I1213 01:18:03.936955 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b6fb557cc-4xz7k" podStartSLOduration=28.936875808 podStartE2EDuration="28.936875808s" podCreationTimestamp="2024-12-13 01:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:03.911036009 +0000 UTC m=+51.608453120" watchObservedRunningTime="2024-12-13 01:18:03.936875808 +0000 UTC m=+51.634292919"
Dec 13 01:18:03.940241 systemd-networkd[1220]: caliae840dfac7c: Gained IPv6LL
Dec 13 01:18:03.940687 systemd-networkd[1220]: cali2913dba4915: Gained IPv6LL
Dec 13 01:18:04.196287 systemd-networkd[1220]: calidae3dbb6794: Gained IPv6LL
Dec 13 01:18:04.802390 containerd[1595]: time="2024-12-13T01:18:04.801841555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:04.804169 containerd[1595]: time="2024-12-13T01:18:04.804121240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Dec 13 01:18:04.806037 containerd[1595]: time="2024-12-13T01:18:04.805926065Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:04.811456 containerd[1595]: time="2024-12-13T01:18:04.811011056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:04.812815 containerd[1595]: time="2024-12-13T01:18:04.812781760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.561367858s"
Dec 13 01:18:04.813043 containerd[1595]: time="2024-12-13T01:18:04.813017808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 01:18:04.814173 containerd[1595]: time="2024-12-13T01:18:04.814139699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Dec 13 01:18:04.820758 containerd[1595]: time="2024-12-13T01:18:04.820719120Z" level=info msg="CreateContainer within sandbox \"5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 01:18:04.852818 containerd[1595]: time="2024-12-13T01:18:04.852767201Z" level=info msg="CreateContainer within sandbox \"5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f262a2340905931e75014773a38a856ac74b32a8675e81cfd0004fe2f9324464\""
Dec 13 01:18:04.854249 containerd[1595]: time="2024-12-13T01:18:04.854212538Z" level=info msg="StartContainer for \"f262a2340905931e75014773a38a856ac74b32a8675e81cfd0004fe2f9324464\""
Dec 13 01:18:04.900960 systemd-networkd[1220]: calie78417b08d5: Gained IPv6LL
Dec 13 01:18:04.908501 kubelet[2799]: I1213 01:18:04.908434 2799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:18:05.000192 containerd[1595]: time="2024-12-13T01:18:05.000046634Z" level=info msg="StartContainer for \"f262a2340905931e75014773a38a856ac74b32a8675e81cfd0004fe2f9324464\" returns successfully"
Dec 13 01:18:07.039388 containerd[1595]: time="2024-12-13T01:18:07.039327770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:07.041395 containerd[1595]: time="2024-12-13T01:18:07.040944693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Dec 13 01:18:07.043385 containerd[1595]: time="2024-12-13T01:18:07.043167071Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:07.051843 containerd[1595]: time="2024-12-13T01:18:07.051770603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:07.053871 containerd[1595]: time="2024-12-13T01:18:07.053799293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.239612752s"
Dec 13 01:18:07.054094 containerd[1595]: time="2024-12-13T01:18:07.054052053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Dec 13 01:18:07.055978 containerd[1595]: time="2024-12-13T01:18:07.055948088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 01:18:07.091012 containerd[1595]: time="2024-12-13T01:18:07.090193782Z" level=info msg="CreateContainer within sandbox \"b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Dec 13 01:18:07.114059 containerd[1595]: time="2024-12-13T01:18:07.113996497Z" level=info msg="CreateContainer within sandbox \"b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"928e2918410d9a1265830cae7a85d3c436b577dae16cd36171645de16912c72c\""
Dec 13 01:18:07.115963 containerd[1595]: time="2024-12-13T01:18:07.115025855Z" level=info msg="StartContainer for \"928e2918410d9a1265830cae7a85d3c436b577dae16cd36171645de16912c72c\""
Dec 13 01:18:07.238094 containerd[1595]: time="2024-12-13T01:18:07.237999961Z" level=info msg="StartContainer for \"928e2918410d9a1265830cae7a85d3c436b577dae16cd36171645de16912c72c\" returns successfully"
Dec 13 01:18:07.426833 ntpd[1539]: Listen normally on 10 calidae3dbb6794 [fe80::ecee:eeff:feee:eeee%9]:123
Dec 13 01:18:07.427444 ntpd[1539]: 13 Dec 01:18:07 ntpd[1539]: Listen normally on 10 calidae3dbb6794 [fe80::ecee:eeff:feee:eeee%9]:123
Dec 13 01:18:07.427444 ntpd[1539]: 13 Dec 01:18:07 ntpd[1539]: Listen normally on 11 caliae840dfac7c [fe80::ecee:eeff:feee:eeee%10]:123
Dec 13 01:18:07.427444 ntpd[1539]: 13 Dec 01:18:07 ntpd[1539]: Listen normally on 12 cali2913dba4915 [fe80::ecee:eeff:feee:eeee%11]:123
Dec 13 01:18:07.427444 ntpd[1539]: 13 Dec 01:18:07 ntpd[1539]: Listen normally on 13 calie78417b08d5 [fe80::ecee:eeff:feee:eeee%12]:123
Dec 13 01:18:07.426986 ntpd[1539]: Listen normally on 11 caliae840dfac7c [fe80::ecee:eeff:feee:eeee%10]:123
Dec 13 01:18:07.427053 ntpd[1539]: Listen normally on 12 cali2913dba4915 [fe80::ecee:eeff:feee:eeee%11]:123
Dec 13 01:18:07.427109 ntpd[1539]: Listen normally on 13 calie78417b08d5 [fe80::ecee:eeff:feee:eeee%12]:123
Dec 13 01:18:07.979849 kubelet[2799]: I1213 01:18:07.979553 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59fdf5598b-5hh5d" podStartSLOduration=29.634476103 podStartE2EDuration="32.979427337s" podCreationTimestamp="2024-12-13 01:17:35 +0000 UTC" firstStartedPulling="2024-12-13 01:18:03.709767873 +0000 UTC m=+51.407184974" lastFinishedPulling="2024-12-13 01:18:07.054719111 +0000 UTC m=+54.752136208" observedRunningTime="2024-12-13 01:18:07.977058841 +0000 UTC m=+55.674475952" watchObservedRunningTime="2024-12-13 01:18:07.979427337 +0000 UTC m=+55.676844448"
Dec 13 01:18:07.980555 kubelet[2799]: I1213 01:18:07.980345 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kmgns" podStartSLOduration=41.980229579 podStartE2EDuration="41.980229579s" podCreationTimestamp="2024-12-13 01:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:03.937694578 +0000 UTC m=+51.635111945" watchObservedRunningTime="2024-12-13 01:18:07.980229579 +0000 UTC m=+55.677646693"
Dec 13 01:18:08.313298 containerd[1595]: time="2024-12-13T01:18:08.312833779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:08.316616 containerd[1595]: time="2024-12-13T01:18:08.315017237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Dec 13 01:18:08.316616 containerd[1595]: time="2024-12-13T01:18:08.315996409Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:08.322956 containerd[1595]: time="2024-12-13T01:18:08.322838238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:08.324652 containerd[1595]: time="2024-12-13T01:18:08.324481338Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.268325188s"
Dec 13 01:18:08.324652 containerd[1595]: time="2024-12-13T01:18:08.324532198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 01:18:08.327864 containerd[1595]: time="2024-12-13T01:18:08.327826364Z" level=info msg="CreateContainer within sandbox \"5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 01:18:08.347040 containerd[1595]: time="2024-12-13T01:18:08.346168469Z" level=info msg="CreateContainer within sandbox \"5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"71be5f20e1748686a2fe6d1e449d744640a8d28c61f55e75cf4b80e8c2ae06e3\""
Dec 13 01:18:08.348455 containerd[1595]: time="2024-12-13T01:18:08.348404976Z" level=info msg="StartContainer for \"71be5f20e1748686a2fe6d1e449d744640a8d28c61f55e75cf4b80e8c2ae06e3\""
Dec 13 01:18:08.454467 containerd[1595]: time="2024-12-13T01:18:08.454396675Z" level=info msg="StartContainer for \"71be5f20e1748686a2fe6d1e449d744640a8d28c61f55e75cf4b80e8c2ae06e3\" returns successfully"
Dec 13 01:18:08.646631 kubelet[2799]: I1213 01:18:08.646244 2799 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 01:18:08.646631 kubelet[2799]: I1213 01:18:08.646303 2799 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 01:18:10.201574 systemd[1]: Started sshd@7-10.128.0.87:22-147.75.109.163:59842.service - OpenSSH per-connection server daemon (147.75.109.163:59842).
Dec 13 01:18:10.488075 sshd[5095]: Accepted publickey for core from 147.75.109.163 port 59842 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:10.488945 sshd[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:10.506813 systemd-logind[1571]: New session 8 of user core.
Dec 13 01:18:10.509263 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:18:10.788686 sshd[5095]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:10.795870 systemd[1]: sshd@7-10.128.0.87:22-147.75.109.163:59842.service: Deactivated successfully.
Dec 13 01:18:10.802819 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:18:10.802997 systemd-logind[1571]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:18:10.809176 systemd-logind[1571]: Removed session 8.
Dec 13 01:18:11.688268 systemd[1]: Started sshd@8-10.128.0.87:22-103.237.144.204:38544.service - OpenSSH per-connection server daemon (103.237.144.204:38544).
Dec 13 01:18:12.457771 containerd[1595]: time="2024-12-13T01:18:12.457358950Z" level=info msg="StopPodSandbox for \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\""
Dec 13 01:18:12.558022 containerd[1595]: 2024-12-13 01:18:12.513 [WARNING][5124] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0", GenerateName:"calico-apiserver-7b6fb557cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b2425adc-9348-4776-a9e2-adeda1a66be5", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6fb557cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67", Pod:"calico-apiserver-7b6fb557cc-4xz7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidae3dbb6794", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:18:12.558022 containerd[1595]: 2024-12-13 01:18:12.514 [INFO][5124] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3"
Dec 13 01:18:12.558022 containerd[1595]: 2024-12-13 01:18:12.514 [INFO][5124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" iface="eth0" netns=""
Dec 13 01:18:12.558022 containerd[1595]: 2024-12-13 01:18:12.514 [INFO][5124] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3"
Dec 13 01:18:12.558022 containerd[1595]: 2024-12-13 01:18:12.514 [INFO][5124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3"
Dec 13 01:18:12.558022 containerd[1595]: 2024-12-13 01:18:12.545 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" HandleID="k8s-pod-network.28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0"
Dec 13 01:18:12.558022 containerd[1595]: 2024-12-13 01:18:12.546 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:18:12.558022 containerd[1595]: 2024-12-13 01:18:12.546 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:18:12.558022 containerd[1595]: 2024-12-13 01:18:12.553 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" HandleID="k8s-pod-network.28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0"
Dec 13 01:18:12.558022 containerd[1595]: 2024-12-13 01:18:12.554 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" HandleID="k8s-pod-network.28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0"
Dec 13 01:18:12.558022 containerd[1595]: 2024-12-13 01:18:12.555 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:18:12.558022 containerd[1595]: 2024-12-13 01:18:12.556 [INFO][5124] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3"
Dec 13 01:18:12.559381 containerd[1595]: time="2024-12-13T01:18:12.558071391Z" level=info msg="TearDown network for sandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\" successfully"
Dec 13 01:18:12.559381 containerd[1595]: time="2024-12-13T01:18:12.558103335Z" level=info msg="StopPodSandbox for \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\" returns successfully"
Dec 13 01:18:12.559381 containerd[1595]: time="2024-12-13T01:18:12.558874893Z" level=info msg="RemovePodSandbox for \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\""
Dec 13 01:18:12.559381 containerd[1595]: time="2024-12-13T01:18:12.559008177Z" level=info msg="Forcibly stopping sandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\""
Dec 13 01:18:12.709331 containerd[1595]: 2024-12-13 01:18:12.604 [WARNING][5150] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0", GenerateName:"calico-apiserver-7b6fb557cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b2425adc-9348-4776-a9e2-adeda1a66be5", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6fb557cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"91bb8953f9ef7dcdc376e98f5d8355d9ee1d5a5a25c02f34a55b920e67821b67", Pod:"calico-apiserver-7b6fb557cc-4xz7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidae3dbb6794", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:18:12.709331 containerd[1595]: 2024-12-13 01:18:12.605 [INFO][5150] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3"
Dec 13 01:18:12.709331 containerd[1595]: 2024-12-13 01:18:12.605 [INFO][5150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" iface="eth0" netns=""
Dec 13 01:18:12.709331 containerd[1595]: 2024-12-13 01:18:12.605 [INFO][5150] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3"
Dec 13 01:18:12.709331 containerd[1595]: 2024-12-13 01:18:12.605 [INFO][5150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3"
Dec 13 01:18:12.709331 containerd[1595]: 2024-12-13 01:18:12.671 [INFO][5156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" HandleID="k8s-pod-network.28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0"
Dec 13 01:18:12.709331 containerd[1595]: 2024-12-13 01:18:12.672 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:18:12.709331 containerd[1595]: 2024-12-13 01:18:12.672 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:18:12.709331 containerd[1595]: 2024-12-13 01:18:12.690 [WARNING][5156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" HandleID="k8s-pod-network.28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0"
Dec 13 01:18:12.709331 containerd[1595]: 2024-12-13 01:18:12.692 [INFO][5156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" HandleID="k8s-pod-network.28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--4xz7k-eth0"
Dec 13 01:18:12.709331 containerd[1595]: 2024-12-13 01:18:12.702 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:18:12.709331 containerd[1595]: 2024-12-13 01:18:12.706 [INFO][5150] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3"
Dec 13 01:18:12.709331 containerd[1595]: time="2024-12-13T01:18:12.709204294Z" level=info msg="TearDown network for sandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\" successfully"
Dec 13 01:18:12.718980 containerd[1595]: time="2024-12-13T01:18:12.718289537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:18:12.719111 containerd[1595]: time="2024-12-13T01:18:12.719039508Z" level=info msg="RemovePodSandbox \"28918bf72fb9f290df53409c3bddea42e28b1599c5c08b2822c86828fd86aee3\" returns successfully"
Dec 13 01:18:12.722918 containerd[1595]: time="2024-12-13T01:18:12.720305623Z" level=info msg="StopPodSandbox for \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\""
Dec 13 01:18:12.855262 containerd[1595]: 2024-12-13 01:18:12.816 [WARNING][5175] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"66c052f9-688a-4b76-896e-248573f70c35", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac", Pod:"csi-node-driver-876cd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae840dfac7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:18:12.855262 containerd[1595]: 2024-12-13 01:18:12.817 [INFO][5175] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4"
Dec 13 01:18:12.855262 containerd[1595]: 2024-12-13 01:18:12.817 [INFO][5175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" iface="eth0" netns=""
Dec 13 01:18:12.855262 containerd[1595]: 2024-12-13 01:18:12.817 [INFO][5175] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4"
Dec 13 01:18:12.855262 containerd[1595]: 2024-12-13 01:18:12.817 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4"
Dec 13 01:18:12.855262 containerd[1595]: 2024-12-13 01:18:12.842 [INFO][5181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" HandleID="k8s-pod-network.75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0"
Dec 13 01:18:12.855262 containerd[1595]: 2024-12-13 01:18:12.842 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:18:12.855262 containerd[1595]: 2024-12-13 01:18:12.842 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:18:12.855262 containerd[1595]: 2024-12-13 01:18:12.849 [WARNING][5181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" HandleID="k8s-pod-network.75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0"
Dec 13 01:18:12.855262 containerd[1595]: 2024-12-13 01:18:12.850 [INFO][5181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" HandleID="k8s-pod-network.75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0"
Dec 13 01:18:12.855262 containerd[1595]: 2024-12-13 01:18:12.852 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:18:12.855262 containerd[1595]: 2024-12-13 01:18:12.853 [INFO][5175] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4"
Dec 13 01:18:12.856693 containerd[1595]: time="2024-12-13T01:18:12.855479849Z" level=info msg="TearDown network for sandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\" successfully"
Dec 13 01:18:12.856693 containerd[1595]: time="2024-12-13T01:18:12.855552392Z" level=info msg="StopPodSandbox for \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\" returns successfully"
Dec 13 01:18:12.856866 containerd[1595]: time="2024-12-13T01:18:12.856778882Z" level=info msg="RemovePodSandbox for \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\""
Dec 13 01:18:12.856866 containerd[1595]: time="2024-12-13T01:18:12.856817865Z" level=info msg="Forcibly stopping sandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\""
Dec 13 01:18:12.929268 sshd[5110]: Invalid user wrxd from 103.237.144.204 port 38544
Dec 13 01:18:12.970006 containerd[1595]: 2024-12-13 01:18:12.903 [WARNING][5199] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"66c052f9-688a-4b76-896e-248573f70c35", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"5e602112a40d20ffcd685a099b1a3d6e913b9bdf77e6618e37812cd96dd138ac", Pod:"csi-node-driver-876cd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae840dfac7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:18:12.970006 containerd[1595]: 2024-12-13 01:18:12.904 [INFO][5199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4"
Dec 13 01:18:12.970006 containerd[1595]: 2024-12-13 01:18:12.904 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" iface="eth0" netns=""
Dec 13 01:18:12.970006 containerd[1595]: 2024-12-13 01:18:12.904 [INFO][5199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4"
Dec 13 01:18:12.970006 containerd[1595]: 2024-12-13 01:18:12.904 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4"
Dec 13 01:18:12.970006 containerd[1595]: 2024-12-13 01:18:12.951 [INFO][5205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" HandleID="k8s-pod-network.75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0"
Dec 13 01:18:12.970006 containerd[1595]: 2024-12-13 01:18:12.951 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:18:12.970006 containerd[1595]: 2024-12-13 01:18:12.952 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:18:12.970006 containerd[1595]: 2024-12-13 01:18:12.963 [WARNING][5205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" HandleID="k8s-pod-network.75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0"
Dec 13 01:18:12.970006 containerd[1595]: 2024-12-13 01:18:12.964 [INFO][5205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" HandleID="k8s-pod-network.75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-csi--node--driver--876cd-eth0"
Dec 13 01:18:12.970006 containerd[1595]: 2024-12-13 01:18:12.965 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:18:12.970006 containerd[1595]: 2024-12-13 01:18:12.967 [INFO][5199] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4"
Dec 13 01:18:12.970006 containerd[1595]: time="2024-12-13T01:18:12.968421959Z" level=info msg="TearDown network for sandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\" successfully"
Dec 13 01:18:12.973665 containerd[1595]: time="2024-12-13T01:18:12.973600871Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:18:12.973867 containerd[1595]: time="2024-12-13T01:18:12.973686821Z" level=info msg="RemovePodSandbox \"75f0b712655e90854236632069bb2c8720becf6ef695b8bf9aec578c7106b9f4\" returns successfully"
Dec 13 01:18:12.974566 containerd[1595]: time="2024-12-13T01:18:12.974466404Z" level=info msg="StopPodSandbox for \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\""
Dec 13 01:18:13.073295 containerd[1595]: 2024-12-13 01:18:13.024 [WARNING][5224] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0b515d89-e330-44b5-a936-07964075e0de", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd", Pod:"coredns-76f75df574-kmgns", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2913dba4915", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:18:13.073295 containerd[1595]: 2024-12-13 01:18:13.025 [INFO][5224] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054"
Dec 13 01:18:13.073295 containerd[1595]: 2024-12-13 01:18:13.025 [INFO][5224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" iface="eth0" netns=""
Dec 13 01:18:13.073295 containerd[1595]: 2024-12-13 01:18:13.025 [INFO][5224] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054"
Dec 13 01:18:13.073295 containerd[1595]: 2024-12-13 01:18:13.025 [INFO][5224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054"
Dec 13 01:18:13.073295 containerd[1595]: 2024-12-13 01:18:13.058 [INFO][5230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" HandleID="k8s-pod-network.c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0"
Dec 13 01:18:13.073295 containerd[1595]: 2024-12-13 01:18:13.058 [INFO][5230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:18:13.073295 containerd[1595]: 2024-12-13 01:18:13.058 [INFO][5230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:18:13.073295 containerd[1595]: 2024-12-13 01:18:13.067 [WARNING][5230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" HandleID="k8s-pod-network.c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0"
Dec 13 01:18:13.073295 containerd[1595]: 2024-12-13 01:18:13.068 [INFO][5230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" HandleID="k8s-pod-network.c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0"
Dec 13 01:18:13.073295 containerd[1595]: 2024-12-13 01:18:13.070 [INFO][5230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:18:13.073295 containerd[1595]: 2024-12-13 01:18:13.071 [INFO][5224] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054"
Dec 13 01:18:13.074289 containerd[1595]: time="2024-12-13T01:18:13.073356335Z" level=info msg="TearDown network for sandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\" successfully"
Dec 13 01:18:13.074289 containerd[1595]: time="2024-12-13T01:18:13.073393662Z" level=info msg="StopPodSandbox for \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\" returns successfully"
Dec 13 01:18:13.074289 containerd[1595]: time="2024-12-13T01:18:13.074184339Z" level=info msg="RemovePodSandbox for \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\""
Dec 13 01:18:13.074289 containerd[1595]: time="2024-12-13T01:18:13.074222063Z" level=info msg="Forcibly stopping sandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\""
Dec 13 01:18:13.135454 sshd[5110]: Received disconnect from 103.237.144.204 port 38544:11: Bye Bye [preauth]
Dec 13 01:18:13.135454 sshd[5110]: Disconnected from invalid user wrxd 103.237.144.204 port 38544 [preauth]
Dec 13 01:18:13.138626 systemd[1]: sshd@8-10.128.0.87:22-103.237.144.204:38544.service: Deactivated successfully.
Dec 13 01:18:13.183783 containerd[1595]: 2024-12-13 01:18:13.123 [WARNING][5249] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0b515d89-e330-44b5-a936-07964075e0de", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"57bc8150dea62218e91374886744d532be8afbb0056b443bf9264013c0254fdd", Pod:"coredns-76f75df574-kmgns", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2913dba4915", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:18:13.183783 containerd[1595]: 2024-12-13 01:18:13.124 [INFO][5249] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054"
Dec 13 01:18:13.183783 containerd[1595]: 2024-12-13 01:18:13.124 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" iface="eth0" netns=""
Dec 13 01:18:13.183783 containerd[1595]: 2024-12-13 01:18:13.124 [INFO][5249] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054"
Dec 13 01:18:13.183783 containerd[1595]: 2024-12-13 01:18:13.124 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054"
Dec 13 01:18:13.183783 containerd[1595]: 2024-12-13 01:18:13.165 [INFO][5255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" HandleID="k8s-pod-network.c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0"
Dec 13 01:18:13.183783 containerd[1595]: 2024-12-13 01:18:13.165 [INFO][5255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:18:13.183783 containerd[1595]: 2024-12-13 01:18:13.165 [INFO][5255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:18:13.183783 containerd[1595]: 2024-12-13 01:18:13.175 [WARNING][5255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" HandleID="k8s-pod-network.c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0"
Dec 13 01:18:13.183783 containerd[1595]: 2024-12-13 01:18:13.175 [INFO][5255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" HandleID="k8s-pod-network.c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--kmgns-eth0"
Dec 13 01:18:13.183783 containerd[1595]: 2024-12-13 01:18:13.180 [INFO][5255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:18:13.183783 containerd[1595]: 2024-12-13 01:18:13.181 [INFO][5249] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054"
Dec 13 01:18:13.186803 containerd[1595]: time="2024-12-13T01:18:13.183839384Z" level=info msg="TearDown network for sandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\" successfully"
Dec 13 01:18:13.189888 containerd[1595]: time="2024-12-13T01:18:13.189659798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:18:13.189888 containerd[1595]: time="2024-12-13T01:18:13.189769222Z" level=info msg="RemovePodSandbox \"c4ae7ff9c92b37f33f5b3844a0299b3c94d28a049d5edec6ce2e7d73ad788054\" returns successfully"
Dec 13 01:18:13.190463 containerd[1595]: time="2024-12-13T01:18:13.190431134Z" level=info msg="StopPodSandbox for \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\""
Dec 13 01:18:13.309009 containerd[1595]: 2024-12-13 01:18:13.261 [WARNING][5276] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8fbdcc41-c83a-49a1-b4de-a5d542202b50", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4", Pod:"coredns-76f75df574-hwqr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9c79aa35f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:18:13.309009 containerd[1595]: 2024-12-13 01:18:13.261 [INFO][5276] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208"
Dec 13 01:18:13.309009 containerd[1595]: 2024-12-13 01:18:13.261 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" iface="eth0" netns=""
Dec 13 01:18:13.309009 containerd[1595]: 2024-12-13 01:18:13.262 [INFO][5276] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208"
Dec 13 01:18:13.309009 containerd[1595]: 2024-12-13 01:18:13.262 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208"
Dec 13 01:18:13.309009 containerd[1595]: 2024-12-13 01:18:13.298 [INFO][5282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" HandleID="k8s-pod-network.c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0"
Dec 13 01:18:13.309009 containerd[1595]: 2024-12-13 01:18:13.298 [INFO][5282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:18:13.309009 containerd[1595]: 2024-12-13 01:18:13.298 [INFO][5282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:18:13.309009 containerd[1595]: 2024-12-13 01:18:13.304 [WARNING][5282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" HandleID="k8s-pod-network.c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0"
Dec 13 01:18:13.309009 containerd[1595]: 2024-12-13 01:18:13.304 [INFO][5282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" HandleID="k8s-pod-network.c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0"
Dec 13 01:18:13.309009 containerd[1595]: 2024-12-13 01:18:13.306 [INFO][5282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:18:13.309009 containerd[1595]: 2024-12-13 01:18:13.307 [INFO][5276] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208"
Dec 13 01:18:13.309009 containerd[1595]: time="2024-12-13T01:18:13.308967624Z" level=info msg="TearDown network for sandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\" successfully"
Dec 13 01:18:13.309009 containerd[1595]: time="2024-12-13T01:18:13.309001435Z" level=info msg="StopPodSandbox for \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\" returns successfully"
Dec 13 01:18:13.311048 containerd[1595]: time="2024-12-13T01:18:13.310997388Z" level=info msg="RemovePodSandbox for \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\""
Dec 13 01:18:13.311578 containerd[1595]: time="2024-12-13T01:18:13.311359064Z" level=info msg="Forcibly stopping sandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\""
Dec 13 01:18:13.409996 containerd[1595]: 2024-12-13 01:18:13.359 [WARNING][5300] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8fbdcc41-c83a-49a1-b4de-a5d542202b50", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"effa40674e2c7571e4d20442d60578c68883b27fd725acea9496062df27edcf4", Pod:"coredns-76f75df574-hwqr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9c79aa35f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:18:13.409996 containerd[1595]: 2024-12-13 01:18:13.360 [INFO][5300] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208"
Dec 13 01:18:13.409996 containerd[1595]: 2024-12-13 01:18:13.360 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" iface="eth0" netns=""
Dec 13 01:18:13.409996 containerd[1595]: 2024-12-13 01:18:13.360 [INFO][5300] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208"
Dec 13 01:18:13.409996 containerd[1595]: 2024-12-13 01:18:13.361 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208"
Dec 13 01:18:13.409996 containerd[1595]: 2024-12-13 01:18:13.398 [INFO][5309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" HandleID="k8s-pod-network.c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0"
Dec 13 01:18:13.409996 containerd[1595]: 2024-12-13 01:18:13.398 [INFO][5309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:18:13.409996 containerd[1595]: 2024-12-13 01:18:13.398 [INFO][5309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:18:13.409996 containerd[1595]: 2024-12-13 01:18:13.405 [WARNING][5309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" HandleID="k8s-pod-network.c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0"
Dec 13 01:18:13.409996 containerd[1595]: 2024-12-13 01:18:13.405 [INFO][5309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" HandleID="k8s-pod-network.c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-coredns--76f75df574--hwqr9-eth0"
Dec 13 01:18:13.409996 containerd[1595]: 2024-12-13 01:18:13.407 [INFO][5309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:18:13.409996 containerd[1595]: 2024-12-13 01:18:13.408 [INFO][5300] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208"
Dec 13 01:18:13.411148 containerd[1595]: time="2024-12-13T01:18:13.409981150Z" level=info msg="TearDown network for sandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\" successfully"
Dec 13 01:18:13.599293 containerd[1595]: time="2024-12-13T01:18:13.598728114Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:18:13.599293 containerd[1595]: time="2024-12-13T01:18:13.598833458Z" level=info msg="RemovePodSandbox \"c698c386c448c86fe5e7dae0fc6f2aa048b668126df700d3deeb63b77983f208\" returns successfully"
Dec 13 01:18:13.601434 containerd[1595]: time="2024-12-13T01:18:13.600765095Z" level=info msg="StopPodSandbox for \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\""
Dec 13 01:18:13.713453 containerd[1595]: 2024-12-13 01:18:13.663 [WARNING][5327] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0", GenerateName:"calico-apiserver-7b6fb557cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dceeefa-4cd7-42f8-ab1a-b8b45660c051", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6fb557cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed", Pod:"calico-apiserver-7b6fb557cc-wgc99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d4d87df705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:18:13.713453 containerd[1595]: 2024-12-13 01:18:13.664 [INFO][5327] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148"
Dec 13 01:18:13.713453 containerd[1595]: 2024-12-13 01:18:13.664 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" iface="eth0" netns=""
Dec 13 01:18:13.713453 containerd[1595]: 2024-12-13 01:18:13.664 [INFO][5327] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148"
Dec 13 01:18:13.713453 containerd[1595]: 2024-12-13 01:18:13.664 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148"
Dec 13 01:18:13.713453 containerd[1595]: 2024-12-13 01:18:13.697 [INFO][5334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" HandleID="k8s-pod-network.17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0"
Dec 13 01:18:13.713453 containerd[1595]: 2024-12-13 01:18:13.697 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:18:13.713453 containerd[1595]: 2024-12-13 01:18:13.698 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:18:13.713453 containerd[1595]: 2024-12-13 01:18:13.709 [WARNING][5334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" HandleID="k8s-pod-network.17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0"
Dec 13 01:18:13.713453 containerd[1595]: 2024-12-13 01:18:13.709 [INFO][5334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" HandleID="k8s-pod-network.17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0"
Dec 13 01:18:13.713453 containerd[1595]: 2024-12-13 01:18:13.710 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:18:13.713453 containerd[1595]: 2024-12-13 01:18:13.712 [INFO][5327] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148"
Dec 13 01:18:13.715370 containerd[1595]: time="2024-12-13T01:18:13.713519780Z" level=info msg="TearDown network for sandbox \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\" successfully"
Dec 13 01:18:13.715370 containerd[1595]: time="2024-12-13T01:18:13.713552794Z" level=info msg="StopPodSandbox for \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\" returns successfully"
Dec 13 01:18:13.715370 containerd[1595]: time="2024-12-13T01:18:13.714226476Z" level=info msg="RemovePodSandbox for \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\""
Dec 13 01:18:13.715370 containerd[1595]: time="2024-12-13T01:18:13.714277028Z" level=info msg="Forcibly stopping sandbox \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\""
Dec 13 01:18:13.822031 containerd[1595]: 2024-12-13 01:18:13.765 [WARNING][5353] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0", GenerateName:"calico-apiserver-7b6fb557cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dceeefa-4cd7-42f8-ab1a-b8b45660c051", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6fb557cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"beda747247fbfc94d17248eb78941816e3ae16e3e0875ed652ead324998a39ed", Pod:"calico-apiserver-7b6fb557cc-wgc99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d4d87df705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:18:13.822031 containerd[1595]: 2024-12-13 01:18:13.765 [INFO][5353] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148"
Dec 13 01:18:13.822031 containerd[1595]: 2024-12-13 01:18:13.766 [INFO][5353] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" iface="eth0" netns=""
Dec 13 01:18:13.822031 containerd[1595]: 2024-12-13 01:18:13.766 [INFO][5353] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148"
Dec 13 01:18:13.822031 containerd[1595]: 2024-12-13 01:18:13.766 [INFO][5353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148"
Dec 13 01:18:13.822031 containerd[1595]: 2024-12-13 01:18:13.803 [INFO][5359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" HandleID="k8s-pod-network.17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0"
Dec 13 01:18:13.822031 containerd[1595]: 2024-12-13 01:18:13.803 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:18:13.822031 containerd[1595]: 2024-12-13 01:18:13.803 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:18:13.822031 containerd[1595]: 2024-12-13 01:18:13.816 [WARNING][5359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" HandleID="k8s-pod-network.17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" Dec 13 01:18:13.822031 containerd[1595]: 2024-12-13 01:18:13.817 [INFO][5359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" HandleID="k8s-pod-network.17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--apiserver--7b6fb557cc--wgc99-eth0" Dec 13 01:18:13.822031 containerd[1595]: 2024-12-13 01:18:13.818 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:18:13.822031 containerd[1595]: 2024-12-13 01:18:13.820 [INFO][5353] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148" Dec 13 01:18:13.822854 containerd[1595]: time="2024-12-13T01:18:13.822053220Z" level=info msg="TearDown network for sandbox \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\" successfully" Dec 13 01:18:14.046959 containerd[1595]: time="2024-12-13T01:18:14.046874947Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:18:14.047490 containerd[1595]: time="2024-12-13T01:18:14.047020227Z" level=info msg="RemovePodSandbox \"17c0acbc7912968e62e5148234a520d592b49c8bcb4d0343ff1c3e631eee9148\" returns successfully" Dec 13 01:18:14.047835 containerd[1595]: time="2024-12-13T01:18:14.047686737Z" level=info msg="StopPodSandbox for \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\"" Dec 13 01:18:14.149787 containerd[1595]: 2024-12-13 01:18:14.094 [WARNING][5378] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0", GenerateName:"calico-kube-controllers-59fdf5598b-", Namespace:"calico-system", SelfLink:"", UID:"c3498958-dd32-4646-b429-46dcbb7deac3", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59fdf5598b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2", Pod:"calico-kube-controllers-59fdf5598b-5hh5d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie78417b08d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:18:14.149787 containerd[1595]: 2024-12-13 01:18:14.095 [INFO][5378] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:18:14.149787 containerd[1595]: 2024-12-13 01:18:14.095 [INFO][5378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" iface="eth0" netns="" Dec 13 01:18:14.149787 containerd[1595]: 2024-12-13 01:18:14.095 [INFO][5378] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:18:14.149787 containerd[1595]: 2024-12-13 01:18:14.095 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:18:14.149787 containerd[1595]: 2024-12-13 01:18:14.132 [INFO][5384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" HandleID="k8s-pod-network.eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:14.149787 containerd[1595]: 2024-12-13 01:18:14.133 [INFO][5384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:18:14.149787 containerd[1595]: 2024-12-13 01:18:14.133 [INFO][5384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:18:14.149787 containerd[1595]: 2024-12-13 01:18:14.142 [WARNING][5384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" HandleID="k8s-pod-network.eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:14.149787 containerd[1595]: 2024-12-13 01:18:14.142 [INFO][5384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" HandleID="k8s-pod-network.eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:14.149787 containerd[1595]: 2024-12-13 01:18:14.145 [INFO][5384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:18:14.149787 containerd[1595]: 2024-12-13 01:18:14.147 [INFO][5378] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:18:14.152112 containerd[1595]: time="2024-12-13T01:18:14.149752413Z" level=info msg="TearDown network for sandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\" successfully" Dec 13 01:18:14.152112 containerd[1595]: time="2024-12-13T01:18:14.150092778Z" level=info msg="StopPodSandbox for \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\" returns successfully" Dec 13 01:18:14.152112 containerd[1595]: time="2024-12-13T01:18:14.151327862Z" level=info msg="RemovePodSandbox for \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\"" Dec 13 01:18:14.152112 containerd[1595]: time="2024-12-13T01:18:14.151367935Z" level=info msg="Forcibly stopping sandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\"" Dec 13 01:18:14.270802 containerd[1595]: 2024-12-13 01:18:14.216 [WARNING][5403] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0", GenerateName:"calico-kube-controllers-59fdf5598b-", Namespace:"calico-system", SelfLink:"", UID:"c3498958-dd32-4646-b429-46dcbb7deac3", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59fdf5598b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0822cd706bddffe4da67.c.flatcar-212911.internal", ContainerID:"b9d89bfa37d374646dfc6f4cc72224c650444a6acb7b8d723e30f40a986f2ec2", Pod:"calico-kube-controllers-59fdf5598b-5hh5d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie78417b08d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:18:14.270802 containerd[1595]: 2024-12-13 01:18:14.216 [INFO][5403] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:18:14.270802 containerd[1595]: 2024-12-13 01:18:14.216 [INFO][5403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" iface="eth0" netns="" Dec 13 01:18:14.270802 containerd[1595]: 2024-12-13 01:18:14.216 [INFO][5403] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:18:14.270802 containerd[1595]: 2024-12-13 01:18:14.216 [INFO][5403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:18:14.270802 containerd[1595]: 2024-12-13 01:18:14.255 [INFO][5409] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" HandleID="k8s-pod-network.eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:14.270802 containerd[1595]: 2024-12-13 01:18:14.256 [INFO][5409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:18:14.270802 containerd[1595]: 2024-12-13 01:18:14.256 [INFO][5409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:18:14.270802 containerd[1595]: 2024-12-13 01:18:14.266 [WARNING][5409] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" HandleID="k8s-pod-network.eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:14.270802 containerd[1595]: 2024-12-13 01:18:14.266 [INFO][5409] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" HandleID="k8s-pod-network.eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Workload="ci--4081--2--1--0822cd706bddffe4da67.c.flatcar--212911.internal-k8s-calico--kube--controllers--59fdf5598b--5hh5d-eth0" Dec 13 01:18:14.270802 containerd[1595]: 2024-12-13 01:18:14.268 [INFO][5409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:18:14.270802 containerd[1595]: 2024-12-13 01:18:14.269 [INFO][5403] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88" Dec 13 01:18:14.271662 containerd[1595]: time="2024-12-13T01:18:14.270883355Z" level=info msg="TearDown network for sandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\" successfully" Dec 13 01:18:14.638357 containerd[1595]: time="2024-12-13T01:18:14.636536173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:18:14.638357 containerd[1595]: time="2024-12-13T01:18:14.636645620Z" level=info msg="RemovePodSandbox \"eb4f30ed6030eebdee3fa623ccd7074697585390058b69f1e0dccadfc1dbbb88\" returns successfully" Dec 13 01:18:15.838471 systemd[1]: Started sshd@9-10.128.0.87:22-147.75.109.163:59848.service - OpenSSH per-connection server daemon (147.75.109.163:59848). Dec 13 01:18:16.139162 sshd[5442]: Accepted publickey for core from 147.75.109.163 port 59848 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:16.141147 sshd[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:16.148855 systemd-logind[1571]: New session 9 of user core. Dec 13 01:18:16.155409 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:18:16.480237 sshd[5442]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:16.492193 systemd-logind[1571]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:18:16.493418 systemd[1]: sshd@9-10.128.0.87:22-147.75.109.163:59848.service: Deactivated successfully. Dec 13 01:18:16.506158 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:18:16.512250 systemd-logind[1571]: Removed session 9. Dec 13 01:18:21.525275 systemd[1]: Started sshd@10-10.128.0.87:22-147.75.109.163:35418.service - OpenSSH per-connection server daemon (147.75.109.163:35418). Dec 13 01:18:21.823846 sshd[5457]: Accepted publickey for core from 147.75.109.163 port 35418 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:21.825812 sshd[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:21.832533 systemd-logind[1571]: New session 10 of user core. Dec 13 01:18:21.838310 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 13 01:18:22.114072 sshd[5457]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:22.120788 systemd[1]: sshd@10-10.128.0.87:22-147.75.109.163:35418.service: Deactivated successfully.
Dec 13 01:18:22.127131 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:18:22.128527 systemd-logind[1571]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:18:22.130212 systemd-logind[1571]: Removed session 10.
Dec 13 01:18:22.163841 systemd[1]: Started sshd@11-10.128.0.87:22-147.75.109.163:35434.service - OpenSSH per-connection server daemon (147.75.109.163:35434).
Dec 13 01:18:22.466207 sshd[5472]: Accepted publickey for core from 147.75.109.163 port 35434 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:22.468235 sshd[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:22.473881 systemd-logind[1571]: New session 11 of user core.
Dec 13 01:18:22.476314 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:18:22.810866 sshd[5472]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:22.818232 systemd[1]: sshd@11-10.128.0.87:22-147.75.109.163:35434.service: Deactivated successfully.
Dec 13 01:18:22.823498 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:18:22.823741 systemd-logind[1571]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:18:22.826353 systemd-logind[1571]: Removed session 11.
Dec 13 01:18:22.858520 systemd[1]: Started sshd@12-10.128.0.87:22-147.75.109.163:35450.service - OpenSSH per-connection server daemon (147.75.109.163:35450).
Dec 13 01:18:23.147606 sshd[5484]: Accepted publickey for core from 147.75.109.163 port 35450 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:23.149674 sshd[5484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:23.156029 systemd-logind[1571]: New session 12 of user core.
Dec 13 01:18:23.162258 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:18:23.435521 sshd[5484]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:23.441819 systemd[1]: sshd@12-10.128.0.87:22-147.75.109.163:35450.service: Deactivated successfully.
Dec 13 01:18:23.447230 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:18:23.448530 systemd-logind[1571]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:18:23.449996 systemd-logind[1571]: Removed session 12.
Dec 13 01:18:28.485279 systemd[1]: Started sshd@13-10.128.0.87:22-147.75.109.163:46088.service - OpenSSH per-connection server daemon (147.75.109.163:46088).
Dec 13 01:18:28.770593 sshd[5506]: Accepted publickey for core from 147.75.109.163 port 46088 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:28.772620 sshd[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:28.779408 systemd-logind[1571]: New session 13 of user core.
Dec 13 01:18:28.785307 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:18:29.057468 sshd[5506]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:29.063794 systemd[1]: sshd@13-10.128.0.87:22-147.75.109.163:46088.service: Deactivated successfully.
Dec 13 01:18:29.069547 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:18:29.070849 systemd-logind[1571]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:18:29.072388 systemd-logind[1571]: Removed session 13.
Dec 13 01:18:31.318719 systemd[1]: run-containerd-runc-k8s.io-915069dbf93e944ae5a607d196ea4a16fbddc7a6d94f65fadb17e918ca679b59-runc.EJRdLD.mount: Deactivated successfully.
Dec 13 01:18:34.105300 systemd[1]: Started sshd@14-10.128.0.87:22-147.75.109.163:46102.service - OpenSSH per-connection server daemon (147.75.109.163:46102).
Dec 13 01:18:34.394566 sshd[5543]: Accepted publickey for core from 147.75.109.163 port 46102 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:34.396492 sshd[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:34.403243 systemd-logind[1571]: New session 14 of user core.
Dec 13 01:18:34.408545 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:18:34.692835 sshd[5543]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:34.699037 systemd[1]: sshd@14-10.128.0.87:22-147.75.109.163:46102.service: Deactivated successfully.
Dec 13 01:18:34.703793 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:18:34.704021 systemd-logind[1571]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:18:34.706962 systemd-logind[1571]: Removed session 14.
Dec 13 01:18:39.743302 systemd[1]: Started sshd@15-10.128.0.87:22-147.75.109.163:42398.service - OpenSSH per-connection server daemon (147.75.109.163:42398).
Dec 13 01:18:40.036804 sshd[5563]: Accepted publickey for core from 147.75.109.163 port 42398 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:40.038804 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:40.044531 systemd-logind[1571]: New session 15 of user core.
Dec 13 01:18:40.049465 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:18:40.330311 sshd[5563]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:40.335639 systemd[1]: sshd@15-10.128.0.87:22-147.75.109.163:42398.service: Deactivated successfully.
Dec 13 01:18:40.343387 systemd-logind[1571]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:18:40.344482 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:18:40.346484 systemd-logind[1571]: Removed session 15.
Dec 13 01:18:40.379646 systemd[1]: Started sshd@16-10.128.0.87:22-147.75.109.163:42404.service - OpenSSH per-connection server daemon (147.75.109.163:42404).
Dec 13 01:18:40.671620 sshd[5577]: Accepted publickey for core from 147.75.109.163 port 42404 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:40.673838 sshd[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:40.679843 systemd-logind[1571]: New session 16 of user core.
Dec 13 01:18:40.686220 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:18:41.067445 sshd[5577]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:41.076091 systemd-logind[1571]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:18:41.079678 systemd[1]: sshd@16-10.128.0.87:22-147.75.109.163:42404.service: Deactivated successfully.
Dec 13 01:18:41.088412 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:18:41.090297 systemd-logind[1571]: Removed session 16.
Dec 13 01:18:41.116273 systemd[1]: Started sshd@17-10.128.0.87:22-147.75.109.163:42418.service - OpenSSH per-connection server daemon (147.75.109.163:42418).
Dec 13 01:18:41.408586 sshd[5590]: Accepted publickey for core from 147.75.109.163 port 42418 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:41.410502 sshd[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:41.417024 systemd-logind[1571]: New session 17 of user core.
Dec 13 01:18:41.421556 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:18:43.132767 kubelet[2799]: I1213 01:18:43.132710 2799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:18:43.324811 kubelet[2799]: I1213 01:18:43.324769 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-876cd" podStartSLOduration=63.248907139 podStartE2EDuration="1m8.324712724s" podCreationTimestamp="2024-12-13 01:17:35 +0000 UTC" firstStartedPulling="2024-12-13 01:18:03.249243949 +0000 UTC m=+50.946661047" lastFinishedPulling="2024-12-13 01:18:08.325049533 +0000 UTC m=+56.022466632" observedRunningTime="2024-12-13 01:18:08.981885195 +0000 UTC m=+56.679302318" watchObservedRunningTime="2024-12-13 01:18:43.324712724 +0000 UTC m=+91.022129835"
Dec 13 01:18:43.830193 sshd[5590]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:43.835276 systemd[1]: sshd@17-10.128.0.87:22-147.75.109.163:42418.service: Deactivated successfully.
Dec 13 01:18:43.842381 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:18:43.843603 systemd-logind[1571]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:18:43.845747 systemd-logind[1571]: Removed session 17.
Dec 13 01:18:43.879945 systemd[1]: Started sshd@18-10.128.0.87:22-147.75.109.163:42434.service - OpenSSH per-connection server daemon (147.75.109.163:42434).
Dec 13 01:18:44.171293 sshd[5611]: Accepted publickey for core from 147.75.109.163 port 42434 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:44.173973 sshd[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:44.181449 systemd-logind[1571]: New session 18 of user core.
Dec 13 01:18:44.186353 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:18:44.822180 sshd[5611]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:44.829881 systemd[1]: sshd@18-10.128.0.87:22-147.75.109.163:42434.service: Deactivated successfully.
Dec 13 01:18:44.841279 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:18:44.843511 systemd-logind[1571]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:18:44.845298 systemd-logind[1571]: Removed session 18.
Dec 13 01:18:44.869530 systemd[1]: Started sshd@19-10.128.0.87:22-147.75.109.163:42450.service - OpenSSH per-connection server daemon (147.75.109.163:42450).
Dec 13 01:18:45.116827 systemd[1]: run-containerd-runc-k8s.io-928e2918410d9a1265830cae7a85d3c436b577dae16cd36171645de16912c72c-runc.TBrKIp.mount: Deactivated successfully.
Dec 13 01:18:45.183361 sshd[5625]: Accepted publickey for core from 147.75.109.163 port 42450 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:45.186553 sshd[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:45.194310 systemd-logind[1571]: New session 19 of user core.
Dec 13 01:18:45.201349 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:18:45.503386 sshd[5625]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:45.508164 systemd[1]: sshd@19-10.128.0.87:22-147.75.109.163:42450.service: Deactivated successfully.
Dec 13 01:18:45.515787 systemd-logind[1571]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:18:45.516379 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:18:45.518503 systemd-logind[1571]: Removed session 19.
Dec 13 01:18:50.553554 systemd[1]: Started sshd@20-10.128.0.87:22-147.75.109.163:55542.service - OpenSSH per-connection server daemon (147.75.109.163:55542).
Dec 13 01:18:50.847035 sshd[5659]: Accepted publickey for core from 147.75.109.163 port 55542 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:50.849687 sshd[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:50.861013 systemd-logind[1571]: New session 20 of user core.
Dec 13 01:18:50.865472 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:18:51.137866 sshd[5659]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:51.144211 systemd[1]: sshd@20-10.128.0.87:22-147.75.109.163:55542.service: Deactivated successfully.
Dec 13 01:18:51.151622 systemd-logind[1571]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:18:51.152686 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:18:51.154210 systemd-logind[1571]: Removed session 20.
Dec 13 01:18:56.188545 systemd[1]: Started sshd@21-10.128.0.87:22-147.75.109.163:58250.service - OpenSSH per-connection server daemon (147.75.109.163:58250).
Dec 13 01:18:56.482049 sshd[5676]: Accepted publickey for core from 147.75.109.163 port 58250 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:56.483741 sshd[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:56.490381 systemd-logind[1571]: New session 21 of user core.
Dec 13 01:18:56.496473 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:18:56.770473 sshd[5676]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:56.776006 systemd[1]: sshd@21-10.128.0.87:22-147.75.109.163:58250.service: Deactivated successfully.
Dec 13 01:18:56.783175 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:18:56.784313 systemd-logind[1571]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:18:56.785722 systemd-logind[1571]: Removed session 21.
Dec 13 01:19:01.327851 systemd[1]: run-containerd-runc-k8s.io-915069dbf93e944ae5a607d196ea4a16fbddc7a6d94f65fadb17e918ca679b59-runc.lhQUgn.mount: Deactivated successfully.
Dec 13 01:19:01.820595 systemd[1]: Started sshd@22-10.128.0.87:22-147.75.109.163:58266.service - OpenSSH per-connection server daemon (147.75.109.163:58266).
Dec 13 01:19:02.105372 sshd[5713]: Accepted publickey for core from 147.75.109.163 port 58266 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:19:02.107121 sshd[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:19:02.113481 systemd-logind[1571]: New session 22 of user core.
Dec 13 01:19:02.118308 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:19:02.390743 sshd[5713]: pam_unix(sshd:session): session closed for user core
Dec 13 01:19:02.397008 systemd[1]: sshd@22-10.128.0.87:22-147.75.109.163:58266.service: Deactivated successfully.
Dec 13 01:19:02.403277 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:19:02.404427 systemd-logind[1571]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:19:02.405965 systemd-logind[1571]: Removed session 22.
Dec 13 01:19:07.442275 systemd[1]: Started sshd@23-10.128.0.87:22-147.75.109.163:41498.service - OpenSSH per-connection server daemon (147.75.109.163:41498).
Dec 13 01:19:07.738609 sshd[5748]: Accepted publickey for core from 147.75.109.163 port 41498 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:19:07.740654 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:19:07.746550 systemd-logind[1571]: New session 23 of user core.
Dec 13 01:19:07.756348 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:19:08.029187 sshd[5748]: pam_unix(sshd:session): session closed for user core
Dec 13 01:19:08.038680 systemd[1]: sshd@23-10.128.0.87:22-147.75.109.163:41498.service: Deactivated successfully.
Dec 13 01:19:08.048312 systemd-logind[1571]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:19:08.051282 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:19:08.054951 systemd-logind[1571]: Removed session 23.