Dec 13 01:30:45.109167 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:30:45.109210 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:30:45.109229 kernel: BIOS-provided physical RAM map:
Dec 13 01:30:45.109242 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Dec 13 01:30:45.109255 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Dec 13 01:30:45.109268 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Dec 13 01:30:45.109284 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Dec 13 01:30:45.109302 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Dec 13 01:30:45.109316 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Dec 13 01:30:45.109330 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Dec 13 01:30:45.109344 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Dec 13 01:30:45.109376 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Dec 13 01:30:45.109391 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Dec 13 01:30:45.109406 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Dec 13 01:30:45.109428 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Dec 13 01:30:45.109456 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Dec 13 01:30:45.109473 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Dec 13 01:30:45.109490 kernel: NX (Execute Disable) protection: active
Dec 13 01:30:45.109506 kernel: APIC: Static calls initialized
Dec 13 01:30:45.109523 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:30:45.109540 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Dec 13 01:30:45.109556 kernel: SMBIOS 2.4 present.
Dec 13 01:30:45.109573 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Dec 13 01:30:45.109589 kernel: Hypervisor detected: KVM
Dec 13 01:30:45.109609 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:30:45.109626 kernel: kvm-clock: using sched offset of 12392448987 cycles
Dec 13 01:30:45.109644 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:30:45.109661 kernel: tsc: Detected 2299.998 MHz processor
Dec 13 01:30:45.109679 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:30:45.109696 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:30:45.109712 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Dec 13 01:30:45.109728 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Dec 13 01:30:45.109744 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:30:45.109764 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 13 01:30:45.109781 kernel: Using GB pages for direct mapping
Dec 13 01:30:45.109797 kernel: Secure boot disabled
Dec 13 01:30:45.109815 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:30:45.109830 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Dec 13 01:30:45.109847 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Dec 13 01:30:45.109864 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Dec 13 01:30:45.109888 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Dec 13 01:30:45.109907 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Dec 13 01:30:45.109925 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Dec 13 01:30:45.109943 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Dec 13 01:30:45.109961 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Dec 13 01:30:45.109978 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Dec 13 01:30:45.109996 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Dec 13 01:30:45.110018 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Dec 13 01:30:45.110036 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Dec 13 01:30:45.110055 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Dec 13 01:30:45.110072 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Dec 13 01:30:45.110090 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Dec 13 01:30:45.110107 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Dec 13 01:30:45.110126 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Dec 13 01:30:45.110144 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Dec 13 01:30:45.110170 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Dec 13 01:30:45.110192 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Dec 13 01:30:45.110210 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:30:45.110229 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 01:30:45.110247 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 01:30:45.110265 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Dec 13 01:30:45.110282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Dec 13 01:30:45.110300 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Dec 13 01:30:45.110318 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Dec 13 01:30:45.110337 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Dec 13 01:30:45.110382 kernel: Zone ranges:
Dec 13 01:30:45.110401 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:30:45.110420 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 01:30:45.110438 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 01:30:45.110463 kernel: Movable zone start for each node
Dec 13 01:30:45.110481 kernel: Early memory node ranges
Dec 13 01:30:45.110499 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Dec 13 01:30:45.110517 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Dec 13 01:30:45.110535 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Dec 13 01:30:45.110558 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Dec 13 01:30:45.110576 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 01:30:45.110594 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Dec 13 01:30:45.110613 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:30:45.110631 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Dec 13 01:30:45.110649 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Dec 13 01:30:45.110668 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 13 01:30:45.110686 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Dec 13 01:30:45.110703 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 01:30:45.110726 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:30:45.110744 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:30:45.110761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:30:45.110780 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:30:45.110798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:30:45.110817 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:30:45.110835 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:30:45.110854 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:30:45.110872 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 01:30:45.110894 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:30:45.110912 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:30:45.110931 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:30:45.110949 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:30:45.110967 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:30:45.110985 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:30:45.111002 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:30:45.111020 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:30:45.111040 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:30:45.111062 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:30:45.111080 kernel: random: crng init done
Dec 13 01:30:45.111098 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 01:30:45.111117 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:30:45.111135 kernel: Fallback order for Node 0: 0
Dec 13 01:30:45.111153 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Dec 13 01:30:45.111171 kernel: Policy zone: Normal
Dec 13 01:30:45.111189 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:30:45.111209 kernel: software IO TLB: area num 2.
Dec 13 01:30:45.111227 kernel: Memory: 7513380K/7860584K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 346944K reserved, 0K cma-reserved)
Dec 13 01:30:45.111245 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:30:45.111263 kernel: Kernel/User page tables isolation: enabled
Dec 13 01:30:45.111280 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:30:45.111298 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:30:45.111315 kernel: Dynamic Preempt: voluntary
Dec 13 01:30:45.111333 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:30:45.111366 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:30:45.111401 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:30:45.111418 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:30:45.111436 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:30:45.111466 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:30:45.111483 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:30:45.111501 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:30:45.111519 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 01:30:45.111536 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:30:45.111554 kernel: Console: colour dummy device 80x25
Dec 13 01:30:45.111576 kernel: printk: console [ttyS0] enabled
Dec 13 01:30:45.111594 kernel: ACPI: Core revision 20230628
Dec 13 01:30:45.111612 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:30:45.111629 kernel: x2apic enabled
Dec 13 01:30:45.111648 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:30:45.111665 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Dec 13 01:30:45.111684 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 01:30:45.111702 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Dec 13 01:30:45.111724 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Dec 13 01:30:45.111742 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Dec 13 01:30:45.111761 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:30:45.111779 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 01:30:45.111798 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 01:30:45.111816 kernel: Spectre V2 : Mitigation: IBRS
Dec 13 01:30:45.111835 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:30:45.111853 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:30:45.111872 kernel: RETBleed: Mitigation: IBRS
Dec 13 01:30:45.111895 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:30:45.111913 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Dec 13 01:30:45.111931 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:30:45.111949 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 01:30:45.111968 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:30:45.111987 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:30:45.112006 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:30:45.112024 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:30:45.112043 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:30:45.112066 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 01:30:45.112085 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:30:45.112103 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:30:45.112122 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:30:45.112141 kernel: landlock: Up and running.
Dec 13 01:30:45.112160 kernel: SELinux: Initializing.
Dec 13 01:30:45.112179 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:30:45.112198 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:30:45.112217 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Dec 13 01:30:45.112239 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:30:45.112258 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:30:45.112277 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:30:45.112293 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Dec 13 01:30:45.112309 kernel: signal: max sigframe size: 1776
Dec 13 01:30:45.112328 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:30:45.112348 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:30:45.112382 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:30:45.112402 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:30:45.112426 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:30:45.112454 kernel: .... node #0, CPUs: #1
Dec 13 01:30:45.112476 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 01:30:45.112496 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:30:45.112516 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:30:45.112536 kernel: smpboot: Max logical packages: 1
Dec 13 01:30:45.112556 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 13 01:30:45.112576 kernel: devtmpfs: initialized
Dec 13 01:30:45.112599 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:30:45.112617 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Dec 13 01:30:45.112636 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:30:45.112654 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:30:45.112673 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:30:45.112690 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:30:45.112708 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:30:45.112726 kernel: audit: type=2000 audit(1734053443.418:1): state=initialized audit_enabled=0 res=1
Dec 13 01:30:45.112744 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:30:45.112767 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:30:45.112785 kernel: cpuidle: using governor menu
Dec 13 01:30:45.112803 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:30:45.112820 kernel: dca service started, version 1.12.1
Dec 13 01:30:45.112839 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:30:45.112858 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:30:45.112876 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:30:45.112894 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:30:45.112913 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:30:45.112935 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:30:45.112953 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:30:45.112971 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:30:45.112990 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:30:45.113009 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:30:45.113027 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 01:30:45.113046 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:30:45.113065 kernel: ACPI: Interpreter enabled
Dec 13 01:30:45.113084 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:30:45.113106 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:30:45.113125 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:30:45.113144 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 13 01:30:45.113163 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 01:30:45.113182 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:30:45.113491 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:30:45.113696 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 01:30:45.113880 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 01:30:45.113911 kernel: PCI host bridge to bus 0000:00
Dec 13 01:30:45.114090 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:30:45.114258 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:30:45.114454 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:30:45.114622 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Dec 13 01:30:45.114788 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:30:45.114999 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 01:30:45.115201 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Dec 13 01:30:45.115419 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 01:30:45.115628 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 01:30:45.115825 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Dec 13 01:30:45.116018 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Dec 13 01:30:45.116213 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Dec 13 01:30:45.116510 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:30:45.116718 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Dec 13 01:30:45.116907 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Dec 13 01:30:45.117107 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:30:45.117294 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Dec 13 01:30:45.117521 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Dec 13 01:30:45.117554 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:30:45.117575 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:30:45.117595 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:30:45.117615 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:30:45.117635 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 01:30:45.117655 kernel: iommu: Default domain type: Translated
Dec 13 01:30:45.117675 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:30:45.117694 kernel: efivars: Registered efivars operations
Dec 13 01:30:45.117713 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:30:45.117738 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:30:45.117758 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Dec 13 01:30:45.117777 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Dec 13 01:30:45.117797 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Dec 13 01:30:45.117817 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Dec 13 01:30:45.117836 kernel: vgaarb: loaded
Dec 13 01:30:45.117856 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:30:45.117876 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:30:45.117895 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:30:45.117919 kernel: pnp: PnP ACPI init
Dec 13 01:30:45.117939 kernel: pnp: PnP ACPI: found 7 devices
Dec 13 01:30:45.117960 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:30:45.117980 kernel: NET: Registered PF_INET protocol family
Dec 13 01:30:45.117999 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:30:45.118019 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 01:30:45.118039 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:30:45.118059 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:30:45.118079 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 13 01:30:45.118103 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 01:30:45.118123 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:30:45.118143 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:30:45.118163 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:30:45.118183 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:30:45.118418 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:30:45.118599 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:30:45.118766 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:30:45.118937 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Dec 13 01:30:45.119157 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 01:30:45.119185 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:30:45.119204 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 01:30:45.119224 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Dec 13 01:30:45.119243 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:30:45.119262 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 01:30:45.119280 kernel: clocksource: Switched to clocksource tsc
Dec 13 01:30:45.119306 kernel: Initialise system trusted keyrings
Dec 13 01:30:45.119325 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 01:30:45.119343 kernel: Key type asymmetric registered
Dec 13 01:30:45.119389 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:30:45.119408 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:30:45.119428 kernel: io scheduler mq-deadline registered
Dec 13 01:30:45.119456 kernel: io scheduler kyber registered
Dec 13 01:30:45.119475 kernel: io scheduler bfq registered
Dec 13 01:30:45.119494 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:30:45.119519 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 01:30:45.119712 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Dec 13 01:30:45.119737 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Dec 13 01:30:45.119915 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Dec 13 01:30:45.119939 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 01:30:45.120115 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Dec 13 01:30:45.120138 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:30:45.120157 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:30:45.120176 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 01:30:45.120200 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Dec 13 01:30:45.120219 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Dec 13 01:30:45.120464 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Dec 13 01:30:45.120490 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:30:45.120509 kernel: i8042: Warning: Keylock active
Dec 13 01:30:45.120527 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:30:45.120546 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:30:45.120733 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 01:30:45.120906 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 01:30:45.121071 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:30:44 UTC (1734053444)
Dec 13 01:30:45.121234 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 01:30:45.121256 kernel: intel_pstate: CPU model not supported
Dec 13 01:30:45.121275 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:30:45.121294 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 01:30:45.121313 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:30:45.121331 kernel: Segment Routing with IPv6
Dec 13 01:30:45.121378 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:30:45.121398 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:30:45.121417 kernel: Key type dns_resolver registered
Dec 13 01:30:45.121435 kernel: IPI shorthand broadcast: enabled
Dec 13 01:30:45.121461 kernel: sched_clock: Marking stable (871005404, 156500473)->(1055197798, -27691921)
Dec 13 01:30:45.121480 kernel: registered taskstats version 1
Dec 13 01:30:45.121499 kernel: Loading compiled-in X.509 certificates
Dec 13 01:30:45.121517 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:30:45.121535 kernel: Key type .fscrypt registered
Dec 13 01:30:45.121558 kernel: Key type fscrypt-provisioning registered
Dec 13 01:30:45.121577 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:30:45.121595 kernel: ima: No architecture policies found
Dec 13 01:30:45.121614 kernel: clk: Disabling unused clocks
Dec 13 01:30:45.121632 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:30:45.121657 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:30:45.121676 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:30:45.121694 kernel: Run /init as init process
Dec 13 01:30:45.121717 kernel: with arguments:
Dec 13 01:30:45.121736 kernel: /init
Dec 13 01:30:45.121754 kernel: with environment:
Dec 13 01:30:45.121772 kernel: HOME=/
Dec 13 01:30:45.121790 kernel: TERM=linux
Dec 13 01:30:45.121809 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:30:45.121827 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:30:45.121850 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:30:45.121876 systemd[1]: Detected virtualization google.
Dec 13 01:30:45.121896 systemd[1]: Detected architecture x86-64.
Dec 13 01:30:45.121915 systemd[1]: Running in initrd.
Dec 13 01:30:45.121934 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:30:45.121953 systemd[1]: Hostname set to .
Dec 13 01:30:45.121974 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:30:45.121993 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:30:45.122012 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:30:45.122037 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:30:45.122058 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:30:45.122078 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:30:45.122097 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:30:45.122117 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:30:45.122140 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:30:45.122160 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:30:45.122183 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:30:45.122204 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:30:45.122243 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:30:45.122267 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:30:45.122286 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:30:45.122306 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:30:45.122331 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:30:45.122388 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:30:45.122411 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:30:45.122431 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:30:45.122459 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:30:45.122479 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:30:45.122500 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:30:45.122520 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:30:45.122546 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:30:45.122567 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:30:45.122587 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:30:45.122608 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:30:45.122629 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:30:45.122649 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:30:45.122669 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:30:45.122730 systemd-journald[183]: Collecting audit messages is disabled.
Dec 13 01:30:45.122778 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:30:45.122799 systemd-journald[183]: Journal started
Dec 13 01:30:45.122838 systemd-journald[183]: Runtime Journal (/run/log/journal/234773f1d5ea464cacefffe4627a1fa3) is 8.0M, max 148.7M, 140.7M free.
Dec 13 01:30:45.125396 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:30:45.128706 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 01:30:45.131642 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:30:45.141824 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:30:45.156633 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:30:45.166668 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:30:45.175416 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:30:45.177261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:30:45.183529 kernel: Bridge firewalling registered
Dec 13 01:30:45.182723 systemd-modules-load[184]: Inserted module 'br_netfilter'
Dec 13 01:30:45.185561 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:30:45.197029 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:30:45.202742 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:30:45.203274 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:30:45.216857 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:30:45.232585 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:30:45.248710 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:30:45.256607 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:30:45.260156 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:30:45.269947 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:30:45.283570 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:30:45.309621 systemd-resolved[215]: Positive Trust Anchors:
Dec 13 01:30:45.309647 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:30:45.309706 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:30:45.331106 dracut-cmdline[219]: dracut-dracut-053
Dec 13 01:30:45.331106 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:30:45.316100 systemd-resolved[215]: Defaulting to hostname 'linux'.
Dec 13 01:30:45.317851 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:30:45.335642 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:30:45.418398 kernel: SCSI subsystem initialized
Dec 13 01:30:45.428405 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:30:45.440398 kernel: iscsi: registered transport (tcp)
Dec 13 01:30:45.463510 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:30:45.463598 kernel: QLogic iSCSI HBA Driver
Dec 13 01:30:45.517022 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:30:45.531595 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:30:45.572734 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:30:45.572822 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:30:45.572851 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:30:45.618399 kernel: raid6: avx2x4 gen() 18306 MB/s
Dec 13 01:30:45.635390 kernel: raid6: avx2x2 gen() 18256 MB/s
Dec 13 01:30:45.652923 kernel: raid6: avx2x1 gen() 14176 MB/s
Dec 13 01:30:45.652962 kernel: raid6: using algorithm avx2x4 gen() 18306 MB/s
Dec 13 01:30:45.670909 kernel: raid6: .... xor() 7980 MB/s, rmw enabled
Dec 13 01:30:45.670965 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:30:45.694394 kernel: xor: automatically using best checksumming function avx
Dec 13 01:30:45.865395 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:30:45.879077 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:30:45.891589 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:30:45.907290 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Dec 13 01:30:45.914518 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:30:45.924593 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:30:45.968165 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Dec 13 01:30:46.005943 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:30:46.012642 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:30:46.103061 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:30:46.116771 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:30:46.148418 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:30:46.160219 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:30:46.164477 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:30:46.168511 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:30:46.182450 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:30:46.207397 kernel: scsi host0: Virtio SCSI HBA
Dec 13 01:30:46.214386 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Dec 13 01:30:46.235802 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:30:46.251377 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:30:46.275379 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:30:46.275448 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:30:46.333563 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:30:46.333954 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:30:46.344771 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:30:46.346612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:30:46.347201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:30:46.360195 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Dec 13 01:30:46.377763 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 13 01:30:46.378024 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 13 01:30:46.378248 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 13 01:30:46.378491 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 01:30:46.378728 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:30:46.378756 kernel: GPT:17805311 != 25165823
Dec 13 01:30:46.378777 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:30:46.378799 kernel: GPT:17805311 != 25165823
Dec 13 01:30:46.378830 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:30:46.378850 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:30:46.378873 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 13 01:30:46.366522 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:30:46.378781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:30:46.409106 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:30:46.416131 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:30:46.454873 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (457)
Dec 13 01:30:46.462407 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (465)
Dec 13 01:30:46.475825 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Dec 13 01:30:46.489317 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Dec 13 01:30:46.489944 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:30:46.506838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Dec 13 01:30:46.513178 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Dec 13 01:30:46.513467 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Dec 13 01:30:46.531609 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:30:46.544466 disk-uuid[548]: Primary Header is updated.
Dec 13 01:30:46.544466 disk-uuid[548]: Secondary Entries is updated.
Dec 13 01:30:46.544466 disk-uuid[548]: Secondary Header is updated.
Dec 13 01:30:46.558383 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:30:46.582399 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:30:46.608489 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:30:47.604049 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:30:47.604139 disk-uuid[549]: The operation has completed successfully.
Dec 13 01:30:47.682477 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:30:47.682645 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:30:47.706630 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:30:47.722727 sh[566]: Success
Dec 13 01:30:47.745504 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:30:47.835334 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:30:47.861499 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:30:47.864912 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:30:47.923395 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:30:47.923504 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:30:47.940265 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:30:47.940342 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:30:47.947097 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:30:47.987414 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:30:47.993293 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:30:47.994263 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:30:48.000678 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:30:48.011562 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:30:48.072600 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:48.072681 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:30:48.072708 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:30:48.096614 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:30:48.096704 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:30:48.110346 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:30:48.128547 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:48.136907 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:30:48.165650 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:30:48.233188 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:30:48.252597 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:30:48.328852 systemd-networkd[749]: lo: Link UP
Dec 13 01:30:48.328867 systemd-networkd[749]: lo: Gained carrier
Dec 13 01:30:48.340309 systemd-networkd[749]: Enumeration completed
Dec 13 01:30:48.340923 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:30:48.340929 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:30:48.384946 ignition[683]: Ignition 2.19.0
Dec 13 01:30:48.344414 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:30:48.384956 ignition[683]: Stage: fetch-offline
Dec 13 01:30:48.353739 systemd-networkd[749]: eth0: Link UP
Dec 13 01:30:48.385008 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:48.353747 systemd-networkd[749]: eth0: Gained carrier
Dec 13 01:30:48.385019 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:48.353763 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:30:48.385148 ignition[683]: parsed url from cmdline: ""
Dec 13 01:30:48.368587 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.13/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 01:30:48.385155 ignition[683]: no config URL provided
Dec 13 01:30:48.387859 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:30:48.385164 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:30:48.411147 systemd[1]: Reached target network.target - Network.
Dec 13 01:30:48.385178 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:30:48.430606 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:30:48.385186 ignition[683]: failed to fetch config: resource requires networking
Dec 13 01:30:48.483161 unknown[759]: fetched base config from "system"
Dec 13 01:30:48.385710 ignition[683]: Ignition finished successfully
Dec 13 01:30:48.483174 unknown[759]: fetched base config from "system"
Dec 13 01:30:48.471196 ignition[759]: Ignition 2.19.0
Dec 13 01:30:48.483184 unknown[759]: fetched user config from "gcp"
Dec 13 01:30:48.471204 ignition[759]: Stage: fetch
Dec 13 01:30:48.485336 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:30:48.471460 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:48.515597 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:30:48.471473 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:48.565971 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:30:48.471595 ignition[759]: parsed url from cmdline: ""
Dec 13 01:30:48.590667 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:30:48.471602 ignition[759]: no config URL provided
Dec 13 01:30:48.629657 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:30:48.471611 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:30:48.645254 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:30:48.471627 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:30:48.650698 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:30:48.471648 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 13 01:30:48.666702 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:30:48.476830 ignition[759]: GET result: OK
Dec 13 01:30:48.681706 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:30:48.476907 ignition[759]: parsing config with SHA512: dffc0ccc835ff7ca87e0c30d77e644fa3ce8a7245a94799728fde2a6fb42bc8dc0c4637e350171539b83c110874c49e981ad94d3ed45b77ec1f9b399747e4918
Dec 13 01:30:48.698750 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:30:48.483597 ignition[759]: fetch: fetch complete
Dec 13 01:30:48.730665 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:30:48.483604 ignition[759]: fetch: fetch passed
Dec 13 01:30:48.483666 ignition[759]: Ignition finished successfully
Dec 13 01:30:48.547894 ignition[766]: Ignition 2.19.0
Dec 13 01:30:48.547904 ignition[766]: Stage: kargs
Dec 13 01:30:48.548103 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:48.548116 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:48.549085 ignition[766]: kargs: kargs passed
Dec 13 01:30:48.549146 ignition[766]: Ignition finished successfully
Dec 13 01:30:48.627113 ignition[772]: Ignition 2.19.0
Dec 13 01:30:48.627123 ignition[772]: Stage: disks
Dec 13 01:30:48.627373 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:48.627393 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:48.628504 ignition[772]: disks: disks passed
Dec 13 01:30:48.628563 ignition[772]: Ignition finished successfully
Dec 13 01:30:48.795148 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:30:48.947517 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:30:48.953570 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:30:49.090395 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:30:49.091152 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:30:49.106178 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:30:49.140509 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:30:49.163154 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:30:49.200544 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788)
Dec 13 01:30:49.200593 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:49.200618 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:30:49.200640 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:30:49.163945 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:30:49.228597 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:30:49.228643 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:30:49.164030 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:30:49.164069 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:30:49.241672 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:30:49.265819 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:30:49.289575 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:30:49.424845 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:30:49.435742 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:30:49.446622 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:30:49.456529 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:30:49.599476 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:30:49.604517 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:30:49.645537 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:49.632606 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:30:49.661536 systemd-networkd[749]: eth0: Gained IPv6LL
Dec 13 01:30:49.663743 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:30:49.711434 ignition[904]: INFO : Ignition 2.19.0
Dec 13 01:30:49.711434 ignition[904]: INFO : Stage: mount
Dec 13 01:30:49.711434 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:49.711434 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:49.754548 ignition[904]: INFO : mount: mount passed
Dec 13 01:30:49.754548 ignition[904]: INFO : Ignition finished successfully
Dec 13 01:30:49.715624 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:30:49.738998 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:30:49.760526 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:30:49.799630 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:30:49.859626 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (917)
Dec 13 01:30:49.859662 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:49.859678 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:30:49.859706 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:30:49.872600 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:30:49.872695 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:30:49.876274 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:30:49.913566 ignition[934]: INFO : Ignition 2.19.0
Dec 13 01:30:49.913566 ignition[934]: INFO : Stage: files
Dec 13 01:30:49.928893 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:49.928893 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:49.928893 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:30:49.928893 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:30:49.928893 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:30:49.928893 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:30:49.928893 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:30:49.928893 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:30:49.926044 unknown[934]: wrote ssh authorized keys file for user: core
Dec 13 01:30:50.029525 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:30:50.029525 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:30:50.079641 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:30:50.281578 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:30:50.281578 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:30:50.602481 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:30:51.165696 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:30:51.165696 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: files passed
Dec 13 01:30:51.206659 ignition[934]: INFO : Ignition finished successfully
Dec 13 01:30:51.170289 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:30:51.201652 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:30:51.222593 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:30:51.308995 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:30:51.422590 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:30:51.422590 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:30:51.309144 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:30:51.473687 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:30:51.334914 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:30:51.353801 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:30:51.376600 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:30:51.439960 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:30:51.440085 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:30:51.463440 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:30:51.483592 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:30:51.504696 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:30:51.511567 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:30:51.601415 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:30:51.621683 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:30:51.657982 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:30:51.658126 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:30:51.677463 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:30:51.698669 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:30:51.708793 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:30:51.728803 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:30:51.728893 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:30:51.772531 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:30:51.772788 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:30:51.789809 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:30:51.823544 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:30:51.823900 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:30:51.861548 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:30:51.861812 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:30:51.878789 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:30:51.899759 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:30:51.916734 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:30:51.933775 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:30:51.933868 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:30:51.964827 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:30:51.974791 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:30:51.992742 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:30:51.992843 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:30:52.023562 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:30:52.023697 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:30:52.054633 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:30:52.054749 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:30:52.073644 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:30:52.073741 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:30:52.099532 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:30:52.103703 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:30:52.176552 ignition[988]: INFO : Ignition 2.19.0
Dec 13 01:30:52.176552 ignition[988]: INFO : Stage: umount
Dec 13 01:30:52.176552 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:52.176552 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:52.176552 ignition[988]: INFO : umount: umount passed
Dec 13 01:30:52.176552 ignition[988]: INFO : Ignition finished successfully
Dec 13 01:30:52.103782 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:30:52.134532 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:30:52.166646 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:30:52.166749 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:30:52.183742 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:30:52.183812 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:30:52.246219 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:30:52.247022 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:30:52.247135 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:30:52.265969 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:30:52.266105 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:30:52.285099 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:30:52.285171 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:30:52.303698 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:30:52.303774 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:30:52.313740 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:30:52.313806 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:30:52.330854 systemd[1]: Stopped target network.target - Network.
Dec 13 01:30:52.347765 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:30:52.347854 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:30:52.362769 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:30:52.388627 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:30:52.392502 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:30:52.396686 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:30:52.414697 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:30:52.429790 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:30:52.429853 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:30:52.445772 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:30:52.445830 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:30:52.462777 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:30:52.462870 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:30:52.479765 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:30:52.479841 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:30:52.496788 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:30:52.496860 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:30:52.513996 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:30:52.518433 systemd-networkd[749]: eth0: DHCPv6 lease lost
Dec 13 01:30:52.540743 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:30:52.559963 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:30:52.560105 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:30:52.580973 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:30:52.581344 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:30:52.601024 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:30:52.601097 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:30:52.612496 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:30:52.633496 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:30:52.633615 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:30:52.644613 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:30:52.644699 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:30:53.079525 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:30:52.667610 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:30:52.667713 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:30:52.685614 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:30:52.685713 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:30:52.704785 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:30:52.725087 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:30:52.725262 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:30:52.750649 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:30:52.750719 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:30:52.771649 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:30:52.771719 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:30:52.790642 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:30:52.790740 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:30:52.820560 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:30:52.820687 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:30:52.852586 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:30:52.852723 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:30:52.889630 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:30:52.891715 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:30:52.891803 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:30:52.918740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:30:52.918811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:30:52.950117 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:30:52.950254 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:30:52.967896 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:30:52.968044 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:30:52.990077 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:30:53.011622 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:30:53.039661 systemd[1]: Switching root.
Dec 13 01:30:53.366523 systemd-journald[183]: Journal stopped
kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 01:30:45.110247 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 01:30:45.110265 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 13 01:30:45.110282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 01:30:45.110300 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 01:30:45.110318 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 01:30:45.110337 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Dec 13 01:30:45.110382 kernel: Zone ranges: Dec 13 01:30:45.110401 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:30:45.110420 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 01:30:45.110438 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:30:45.110463 kernel: Movable zone start for each node Dec 13 01:30:45.110481 kernel: Early memory node ranges Dec 13 01:30:45.110499 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 01:30:45.110517 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 01:30:45.110535 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Dec 13 01:30:45.110558 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 01:30:45.110576 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:30:45.110594 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 01:30:45.110613 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:30:45.110631 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 01:30:45.110649 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 01:30:45.110668 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 01:30:45.110686 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 01:30:45.110703 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 01:30:45.110726 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:30:45.110744 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:30:45.110761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:30:45.110780 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:30:45.110798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:30:45.110817 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:30:45.110835 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:30:45.110854 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:30:45.110872 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 01:30:45.110894 kernel: Booting paravirtualized kernel on KVM Dec 13 01:30:45.110912 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:30:45.110931 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:30:45.110949 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:30:45.110967 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:30:45.110985 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:30:45.111002 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:30:45.111020 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 
01:30:45.111040 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:30:45.111062 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:30:45.111080 kernel: random: crng init done Dec 13 01:30:45.111098 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 01:30:45.111117 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:30:45.111135 kernel: Fallback order for Node 0: 0 Dec 13 01:30:45.111153 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Dec 13 01:30:45.111171 kernel: Policy zone: Normal Dec 13 01:30:45.111189 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:30:45.111209 kernel: software IO TLB: area num 2. Dec 13 01:30:45.111227 kernel: Memory: 7513380K/7860584K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 346944K reserved, 0K cma-reserved) Dec 13 01:30:45.111245 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:30:45.111263 kernel: Kernel/User page tables isolation: enabled Dec 13 01:30:45.111280 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:30:45.111298 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:30:45.111315 kernel: Dynamic Preempt: voluntary Dec 13 01:30:45.111333 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:30:45.111366 kernel: rcu: RCU event tracing is enabled. Dec 13 01:30:45.111401 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:30:45.111418 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:30:45.111436 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:30:45.111466 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:30:45.111483 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:30:45.111501 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:30:45.111519 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:30:45.111536 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:30:45.111554 kernel: Console: colour dummy device 80x25 Dec 13 01:30:45.111576 kernel: printk: console [ttyS0] enabled Dec 13 01:30:45.111594 kernel: ACPI: Core revision 20230628 Dec 13 01:30:45.111612 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:30:45.111629 kernel: x2apic enabled Dec 13 01:30:45.111648 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:30:45.111665 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 01:30:45.111684 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:30:45.111702 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Dec 13 01:30:45.111724 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 01:30:45.111742 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 01:30:45.111761 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:30:45.111779 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 01:30:45.111798 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 01:30:45.111816 kernel: Spectre V2 : Mitigation: IBRS Dec 13 01:30:45.111835 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:30:45.111853 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:30:45.111872 kernel: RETBleed: Mitigation: IBRS Dec 13 01:30:45.111895 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:30:45.111913 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Dec 13 01:30:45.111931 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:30:45.111949 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 01:30:45.111968 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:30:45.111987 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:30:45.112006 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:30:45.112024 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:30:45.112043 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:30:45.112066 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 01:30:45.112085 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:30:45.112103 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:30:45.112122 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:30:45.112141 kernel: landlock: Up and running. Dec 13 01:30:45.112160 kernel: SELinux: Initializing. Dec 13 01:30:45.112179 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:30:45.112198 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:30:45.112217 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 01:30:45.112239 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:30:45.112258 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:30:45.112277 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:30:45.112293 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 01:30:45.112309 kernel: signal: max sigframe size: 1776 Dec 13 01:30:45.112328 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:30:45.112348 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:30:45.112382 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:30:45.112402 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:30:45.112426 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:30:45.112454 kernel: .... node #0, CPUs: #1 Dec 13 01:30:45.112476 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 01:30:45.112496 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 01:30:45.112516 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:30:45.112536 kernel: smpboot: Max logical packages: 1 Dec 13 01:30:45.112556 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 01:30:45.112576 kernel: devtmpfs: initialized Dec 13 01:30:45.112599 kernel: x86/mm: Memory block size: 128MB Dec 13 01:30:45.112617 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 01:30:45.112636 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:30:45.112654 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:30:45.112673 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:30:45.112690 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:30:45.112708 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:30:45.112726 kernel: audit: type=2000 audit(1734053443.418:1): state=initialized audit_enabled=0 res=1 Dec 13 01:30:45.112744 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:30:45.112767 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:30:45.112785 kernel: cpuidle: using governor menu Dec 13 01:30:45.112803 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:30:45.112820 kernel: dca service started, version 1.12.1 Dec 13 01:30:45.112839 kernel: PCI: Using configuration type 1 for base access Dec 13 01:30:45.112858 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:30:45.112876 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:30:45.112894 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:30:45.112913 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:30:45.112935 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:30:45.112953 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:30:45.112971 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:30:45.112990 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:30:45.113009 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:30:45.113027 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 01:30:45.113046 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:30:45.113065 kernel: ACPI: Interpreter enabled Dec 13 01:30:45.113084 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:30:45.113106 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:30:45.113125 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:30:45.113144 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 13 01:30:45.113163 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 01:30:45.113182 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:30:45.113491 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:30:45.113696 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 01:30:45.113880 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 01:30:45.113911 kernel: PCI host bridge to bus 0000:00 Dec 13 01:30:45.114090 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:30:45.114258 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:30:45.114454 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:30:45.114622 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 01:30:45.114788 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:30:45.114999 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 01:30:45.115201 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 01:30:45.115419 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 01:30:45.115628 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 01:30:45.115825 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 01:30:45.116018 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 01:30:45.116213 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 01:30:45.116510 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:30:45.116718 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 01:30:45.116907 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 01:30:45.117107 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:30:45.117294 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 01:30:45.117521 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 01:30:45.117554 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:30:45.117575 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:30:45.117595 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:30:45.117615 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:30:45.117635 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 01:30:45.117655 kernel: iommu: Default domain type: Translated Dec 13 01:30:45.117675 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:30:45.117694 kernel: efivars: Registered efivars operations Dec 13 01:30:45.117713 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:30:45.117738 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:30:45.117758 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 01:30:45.117777 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 01:30:45.117797 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 01:30:45.117817 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 01:30:45.117836 kernel: vgaarb: loaded Dec 13 01:30:45.117856 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:30:45.117876 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:30:45.117895 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:30:45.117919 kernel: pnp: PnP ACPI init Dec 13 01:30:45.117939 kernel: pnp: PnP ACPI: found 7 devices Dec 13 01:30:45.117960 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:30:45.117980 kernel: NET: Registered PF_INET protocol family Dec 13 01:30:45.117999 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:30:45.118019 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 01:30:45.118039 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:30:45.118059 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:30:45.118079 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 01:30:45.118103 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 01:30:45.118123 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:30:45.118143 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:30:45.118163 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:30:45.118183 kernel: NET: Registered PF_XDP protocol family Dec 13 01:30:45.118418 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:30:45.118599 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:30:45.118766 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:30:45.118937 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 01:30:45.119157 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:30:45.119185 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:30:45.119204 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 01:30:45.119224 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 01:30:45.119243 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:30:45.119262 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:30:45.119280 kernel: clocksource: Switched to clocksource tsc Dec 13 01:30:45.119306 kernel: Initialise system trusted keyrings Dec 13 01:30:45.119325 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 01:30:45.119343 kernel: Key type asymmetric registered Dec 13 01:30:45.119389 kernel: Asymmetric key parser 'x509' registered Dec 13 01:30:45.119408 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:30:45.119428 kernel: io scheduler mq-deadline registered Dec 13 01:30:45.119456 kernel: io scheduler kyber registered Dec 13 01:30:45.119475 kernel: io scheduler bfq registered Dec 13 01:30:45.119494 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:30:45.119519 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 01:30:45.119712 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 01:30:45.119737 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 01:30:45.119915 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 01:30:45.119939 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 01:30:45.120115 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 01:30:45.120138 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:30:45.120157 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:30:45.120176 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 01:30:45.120200 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 01:30:45.120219 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 01:30:45.120464 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 01:30:45.120490 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:30:45.120509 kernel: i8042: Warning: Keylock active Dec 13 01:30:45.120527 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:30:45.120546 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:30:45.120733 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 01:30:45.120906 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 01:30:45.121071 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:30:44 UTC (1734053444) Dec 13 01:30:45.121234 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 01:30:45.121256 kernel: intel_pstate: CPU model not supported Dec 13 01:30:45.121275 kernel: pstore: Using crash dump compression: deflate Dec 13 01:30:45.121294 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 01:30:45.121313 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:30:45.121331 kernel: Segment Routing with IPv6 Dec 13 01:30:45.121378 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:30:45.121398 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:30:45.121417 kernel: Key type dns_resolver registered Dec 13 01:30:45.121435 kernel: IPI shorthand broadcast: enabled Dec 13 01:30:45.121461 kernel: sched_clock: Marking stable (871005404, 156500473)->(1055197798, -27691921) Dec 13 01:30:45.121480 kernel: registered taskstats version 1 Dec 13 01:30:45.121499 kernel: Loading compiled-in X.509 certificates Dec 13 01:30:45.121517 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:30:45.121535 kernel: Key type .fscrypt registered Dec 13 01:30:45.121558 kernel: Key type fscrypt-provisioning registered Dec 13 01:30:45.121577 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:30:45.121595 kernel: ima: No architecture policies found Dec 13 
01:30:45.121614 kernel: clk: Disabling unused clocks Dec 13 01:30:45.121632 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:30:45.121657 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:30:45.121676 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:30:45.121694 kernel: Run /init as init process Dec 13 01:30:45.121717 kernel: with arguments: Dec 13 01:30:45.121736 kernel: /init Dec 13 01:30:45.121754 kernel: with environment: Dec 13 01:30:45.121772 kernel: HOME=/ Dec 13 01:30:45.121790 kernel: TERM=linux Dec 13 01:30:45.121809 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:30:45.121827 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:30:45.121850 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:30:45.121876 systemd[1]: Detected virtualization google. Dec 13 01:30:45.121896 systemd[1]: Detected architecture x86-64. Dec 13 01:30:45.121915 systemd[1]: Running in initrd. Dec 13 01:30:45.121934 systemd[1]: No hostname configured, using default hostname. Dec 13 01:30:45.121953 systemd[1]: Hostname set to . Dec 13 01:30:45.121974 systemd[1]: Initializing machine ID from random generator. Dec 13 01:30:45.121993 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:30:45.122012 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:30:45.122037 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:30:45.122058 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:30:45.122078 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:30:45.122097 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:30:45.122117 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:30:45.122140 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:30:45.122160 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:30:45.122183 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:30:45.122204 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:30:45.122243 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:30:45.122267 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:30:45.122286 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:30:45.122306 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:30:45.122331 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:30:45.122388 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:30:45.122411 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:30:45.122431 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:30:45.122459 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:30:45.122479 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:30:45.122500 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:30:45.122520 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:30:45.122546 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:30:45.122567 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:30:45.122587 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:30:45.122608 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:30:45.122629 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:30:45.122649 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:30:45.122669 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:30:45.122730 systemd-journald[183]: Collecting audit messages is disabled. Dec 13 01:30:45.122778 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:30:45.122799 systemd-journald[183]: Journal started Dec 13 01:30:45.122838 systemd-journald[183]: Runtime Journal (/run/log/journal/234773f1d5ea464cacefffe4627a1fa3) is 8.0M, max 148.7M, 140.7M free. Dec 13 01:30:45.125396 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:30:45.128706 systemd-modules-load[184]: Inserted module 'overlay' Dec 13 01:30:45.131642 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:30:45.141824 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:30:45.156633 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:30:45.166668 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:30:45.175416 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:30:45.177261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:30:45.183529 kernel: Bridge firewalling registered Dec 13 01:30:45.182723 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 01:30:45.185561 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:30:45.197029 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:30:45.202742 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:30:45.203274 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:30:45.216857 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:30:45.232585 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:30:45.248710 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:30:45.256607 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:30:45.260156 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:30:45.269947 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 01:30:45.283570 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:30:45.309621 systemd-resolved[215]: Positive Trust Anchors: Dec 13 01:30:45.309647 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:30:45.309706 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:30:45.331106 dracut-cmdline[219]: dracut-dracut-053 Dec 13 01:30:45.331106 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:30:45.316100 systemd-resolved[215]: Defaulting to hostname 'linux'. Dec 13 01:30:45.317851 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:30:45.335642 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:30:45.418398 kernel: SCSI subsystem initialized Dec 13 01:30:45.428405 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:30:45.440398 kernel: iscsi: registered transport (tcp) Dec 13 01:30:45.463510 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:30:45.463598 kernel: QLogic iSCSI HBA Driver Dec 13 01:30:45.517022 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:30:45.531595 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:30:45.572734 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:30:45.572822 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:30:45.572851 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:30:45.618399 kernel: raid6: avx2x4 gen() 18306 MB/s Dec 13 01:30:45.635390 kernel: raid6: avx2x2 gen() 18256 MB/s Dec 13 01:30:45.652923 kernel: raid6: avx2x1 gen() 14176 MB/s Dec 13 01:30:45.652962 kernel: raid6: using algorithm avx2x4 gen() 18306 MB/s Dec 13 01:30:45.670909 kernel: raid6: .... xor() 7980 MB/s, rmw enabled Dec 13 01:30:45.670965 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:30:45.694394 kernel: xor: automatically using best checksumming function avx Dec 13 01:30:45.865395 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:30:45.879077 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:30:45.891589 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:30:45.907290 systemd-udevd[401]: Using default interface naming scheme 'v255'. Dec 13 01:30:45.914518 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 13 01:30:45.924593 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:30:45.968165 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Dec 13 01:30:46.005943 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:30:46.012642 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:30:46.103061 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:30:46.116771 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:30:46.148418 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:30:46.160219 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:30:46.164477 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:30:46.168511 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:30:46.182450 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:30:46.207397 kernel: scsi host0: Virtio SCSI HBA Dec 13 01:30:46.214386 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 01:30:46.235802 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:30:46.251377 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:30:46.275379 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:30:46.275448 kernel: AES CTR mode by8 optimization enabled Dec 13 01:30:46.333563 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:30:46.333954 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:30:46.344771 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:30:46.346612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:30:46.347201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:30:46.360195 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 01:30:46.377763 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 01:30:46.378024 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 01:30:46.378248 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 01:30:46.378491 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 01:30:46.378728 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:30:46.378756 kernel: GPT:17805311 != 25165823 Dec 13 01:30:46.378777 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:30:46.378799 kernel: GPT:17805311 != 25165823 Dec 13 01:30:46.378830 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:30:46.378850 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:30:46.378873 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 01:30:46.366522 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:30:46.378781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:30:46.409106 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:30:46.416131 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 01:30:46.454873 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (457)
Dec 13 01:30:46.462407 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (465)
Dec 13 01:30:46.475825 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Dec 13 01:30:46.489317 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Dec 13 01:30:46.489944 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:30:46.506838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Dec 13 01:30:46.513178 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Dec 13 01:30:46.513467 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Dec 13 01:30:46.531609 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:30:46.544466 disk-uuid[548]: Primary Header is updated.
Dec 13 01:30:46.544466 disk-uuid[548]: Secondary Entries is updated.
Dec 13 01:30:46.544466 disk-uuid[548]: Secondary Header is updated.
Dec 13 01:30:46.558383 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:30:46.582399 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:30:46.608489 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:30:47.604049 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:30:47.604139 disk-uuid[549]: The operation has completed successfully.
Dec 13 01:30:47.682477 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:30:47.682645 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:30:47.706630 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:30:47.722727 sh[566]: Success
Dec 13 01:30:47.745504 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:30:47.835334 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:30:47.861499 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:30:47.864912 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:30:47.923395 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:30:47.923504 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:30:47.940265 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:30:47.940342 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:30:47.947097 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:30:47.987414 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:30:47.993293 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:30:47.994263 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:30:48.000678 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:30:48.011562 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
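For context on the GPT complaints in the block above: "17805311 != 25165823" means the primary GPT header still points at a backup header located where the disk used to end, because the GCE persistent disk is larger than the image it was created from; the disk-uuid lines ("Secondary Header is updated.") record the repair that moves the backup header to the true last LBA. A minimal Python sketch of the underlying check, assuming a 512-byte logical sector size and a hypothetical /dev/sda path (header offsets follow the UEFI GPT specification; reading the device requires root):

    import struct

    DEV = "/dev/sda"   # hypothetical device path for illustration
    SECTOR = 512       # matches the "512-byte logical blocks" line above

    with open(DEV, "rb") as f:
        f.seek(1 * SECTOR)              # primary GPT header lives at LBA 1
        hdr = f.read(92)
        assert hdr[:8] == b"EFI PART", "no GPT signature"
        # bytes 24-31: current header LBA, bytes 32-39: backup header LBA
        current_lba, backup_lba = struct.unpack_from("<QQ", hdr, 24)
        f.seek(0, 2)                    # derive the real last LBA from device size
        last_lba = f.tell() // SECTOR - 1

    if backup_lba != last_lba:
        # This is the condition the kernel reports as "17805311 != 25165823".
        print(f"Alternate GPT header not at the end of the disk: "
              f"{backup_lba} != {last_lba}")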
Dec 13 01:30:48.072600 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:48.072681 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:30:48.072708 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:30:48.096614 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:30:48.096704 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:30:48.110346 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:30:48.128547 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:48.136907 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:30:48.165650 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:30:48.233188 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:30:48.252597 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:30:48.328852 systemd-networkd[749]: lo: Link UP
Dec 13 01:30:48.328867 systemd-networkd[749]: lo: Gained carrier
Dec 13 01:30:48.340309 systemd-networkd[749]: Enumeration completed
Dec 13 01:30:48.340923 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:30:48.340929 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:30:48.384946 ignition[683]: Ignition 2.19.0
Dec 13 01:30:48.344414 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:30:48.384956 ignition[683]: Stage: fetch-offline
Dec 13 01:30:48.353739 systemd-networkd[749]: eth0: Link UP
Dec 13 01:30:48.385008 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:48.353747 systemd-networkd[749]: eth0: Gained carrier
Dec 13 01:30:48.385019 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:48.353763 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:30:48.385148 ignition[683]: parsed url from cmdline: ""
Dec 13 01:30:48.368587 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.13/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 01:30:48.385155 ignition[683]: no config URL provided
Dec 13 01:30:48.387859 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:30:48.385164 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:30:48.411147 systemd[1]: Reached target network.target - Network.
Dec 13 01:30:48.385178 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:30:48.430606 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:30:48.385186 ignition[683]: failed to fetch config: resource requires networking
Dec 13 01:30:48.483161 unknown[759]: fetched base config from "system"
Dec 13 01:30:48.385710 ignition[683]: Ignition finished successfully
Dec 13 01:30:48.483174 unknown[759]: fetched base config from "system"
Dec 13 01:30:48.471196 ignition[759]: Ignition 2.19.0
Dec 13 01:30:48.483184 unknown[759]: fetched user config from "gcp"
Dec 13 01:30:48.471204 ignition[759]: Stage: fetch
Dec 13 01:30:48.485336 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:30:48.471460 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:48.515597 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:30:48.471473 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:48.565971 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:30:48.471595 ignition[759]: parsed url from cmdline: ""
Dec 13 01:30:48.590667 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:30:48.471602 ignition[759]: no config URL provided
Dec 13 01:30:48.629657 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:30:48.471611 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:30:48.645254 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:30:48.471627 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:30:48.650698 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:30:48.471648 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 13 01:30:48.666702 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:30:48.476830 ignition[759]: GET result: OK
Dec 13 01:30:48.681706 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:30:48.476907 ignition[759]: parsing config with SHA512: dffc0ccc835ff7ca87e0c30d77e644fa3ce8a7245a94799728fde2a6fb42bc8dc0c4637e350171539b83c110874c49e981ad94d3ed45b77ec1f9b399747e4918
Dec 13 01:30:48.698750 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:30:48.483597 ignition[759]: fetch: fetch complete
Dec 13 01:30:48.730665 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:30:48.483604 ignition[759]: fetch: fetch passed
Dec 13 01:30:48.483666 ignition[759]: Ignition finished successfully
Dec 13 01:30:48.547894 ignition[766]: Ignition 2.19.0
Dec 13 01:30:48.547904 ignition[766]: Stage: kargs
Dec 13 01:30:48.548103 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:48.548116 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:48.549085 ignition[766]: kargs: kargs passed
Dec 13 01:30:48.549146 ignition[766]: Ignition finished successfully
Dec 13 01:30:48.627113 ignition[772]: Ignition 2.19.0
Dec 13 01:30:48.627123 ignition[772]: Stage: disks
Dec 13 01:30:48.627373 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:48.627393 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:48.628504 ignition[772]: disks: disks passed
Dec 13 01:30:48.628563 ignition[772]: Ignition finished successfully
Dec 13 01:30:48.795148 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:30:48.947517 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:30:48.953570 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:30:49.090395 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:30:49.091152 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:30:49.106178 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:30:49.140509 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:30:49.163154 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
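The fetch-stage lines above show where the machine's configuration actually comes from on GCE: Ignition asks the link-local metadata server for the instance's user-data, then logs the SHA512 of what it received before parsing it. A minimal Python sketch of the same request (not Ignition's implementation); the endpoint and the mandatory Metadata-Flavor header are standard GCE metadata-server conventions, and the request only succeeds from inside a GCE instance:

    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/computeMetadata/v1/"
           "instance/attributes/user-data")

    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        config = resp.read()

    print("GET result: OK")
    # Ignition prints this digest as its "parsing config with SHA512:" line.
    print("SHA512:", hashlib.sha512(config).hexdigest())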
Dec 13 01:30:49.200544 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788)
Dec 13 01:30:49.200593 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:49.200618 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:30:49.200640 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:30:49.163945 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:30:49.228597 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:30:49.228643 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:30:49.164030 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:30:49.164069 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:30:49.241672 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:30:49.265819 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:30:49.289575 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:30:49.424845 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:30:49.435742 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:30:49.446622 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:30:49.456529 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:30:49.599476 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:30:49.604517 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:30:49.645537 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:49.632606 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:30:49.661536 systemd-networkd[749]: eth0: Gained IPv6LL
Dec 13 01:30:49.663743 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:30:49.711434 ignition[904]: INFO : Ignition 2.19.0
Dec 13 01:30:49.711434 ignition[904]: INFO : Stage: mount
Dec 13 01:30:49.711434 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:49.711434 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:49.754548 ignition[904]: INFO : mount: mount passed
Dec 13 01:30:49.754548 ignition[904]: INFO : Ignition finished successfully
Dec 13 01:30:49.715624 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:30:49.738998 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:30:49.760526 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:30:49.799630 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:30:49.859626 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (917)
Dec 13 01:30:49.859662 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:49.859678 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:30:49.859706 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:30:49.872600 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:30:49.872695 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:30:49.876274 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:30:49.913566 ignition[934]: INFO : Ignition 2.19.0
Dec 13 01:30:49.913566 ignition[934]: INFO : Stage: files
Dec 13 01:30:49.928893 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:49.928893 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:49.928893 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:30:49.928893 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:30:49.928893 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:30:49.928893 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:30:49.928893 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:30:49.928893 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:30:49.926044 unknown[934]: wrote ssh authorized keys file for user: core
Dec 13 01:30:50.029525 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:30:50.029525 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:30:50.079641 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:30:50.281578 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:30:50.281578 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:30:50.313486 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:30:50.602481 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:30:51.165696 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:30:51.165696 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:30:51.206659 ignition[934]: INFO : files: files passed
Dec 13 01:30:51.206659 ignition[934]: INFO : Ignition finished successfully
Dec 13 01:30:51.170289 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:30:51.201652 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:30:51.222593 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:30:51.308995 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:30:51.422590 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:30:51.422590 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:30:51.309144 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
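The files-stage operations just logged (op(3) through op(d)) are driven entirely by the fetched Ignition config. As an illustration only, and not the instance's actual config, the following Python snippet builds roughly the Ignition v3-shaped JSON that would produce those operations; the spec version and the prepare-helm.service unit body are placeholders:

    import json

    config = {
        "ignition": {"version": "3.3.0"},  # placeholder spec version
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source":
                     "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
                 "contents": {"source":
                     "https://github.com/flatcar/sysext-bakery/releases/"
                     "download/latest/kubernetes-v1.29.2-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/"
                           "kubernetes-v1.29.2-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service",
                 "enabled": True,               # yields the op(d) preset line
                 "contents": "[Unit]\n# placeholder unit body\n"},
            ],
        },
    }
    print(json.dumps(config, indent=2))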
Dec 13 01:30:51.473687 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:30:51.334914 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:30:51.353801 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:30:51.376600 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:30:51.439960 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:30:51.440085 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:30:51.463440 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:30:51.483592 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:30:51.504696 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:30:51.511567 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:30:51.601415 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:30:51.621683 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:30:51.657982 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:30:51.658126 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:30:51.677463 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:30:51.698669 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:30:51.708793 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:30:51.728803 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:30:51.728893 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:30:51.772531 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:30:51.772788 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:30:51.789809 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:30:51.823544 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:30:51.823900 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:30:51.861548 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:30:51.861812 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:30:51.878789 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:30:51.899759 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:30:51.916734 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:30:51.933775 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:30:51.933868 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:30:51.964827 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:30:51.974791 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:30:51.992742 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:30:51.992843 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:30:52.023562 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:30:52.023697 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:30:52.054633 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:30:52.054749 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:30:52.073644 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:30:52.073741 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:30:52.099532 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:30:52.103703 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:30:52.176552 ignition[988]: INFO : Ignition 2.19.0
Dec 13 01:30:52.176552 ignition[988]: INFO : Stage: umount
Dec 13 01:30:52.176552 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:52.176552 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:30:52.176552 ignition[988]: INFO : umount: umount passed
Dec 13 01:30:52.176552 ignition[988]: INFO : Ignition finished successfully
Dec 13 01:30:52.103782 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:30:52.134532 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:30:52.166646 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:30:52.166749 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:30:52.183742 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:30:52.183812 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:30:52.246219 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:30:52.247022 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:30:52.247135 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:30:52.265969 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:30:52.266105 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:30:52.285099 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:30:52.285171 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:30:52.303698 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:30:52.303774 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:30:52.313740 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:30:52.313806 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:30:52.330854 systemd[1]: Stopped target network.target - Network.
Dec 13 01:30:52.347765 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:30:52.347854 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:30:52.362769 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:30:52.388627 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:30:52.392502 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:30:52.396686 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:30:52.414697 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:30:52.429790 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:30:52.429853 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:30:52.445772 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:30:52.445830 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:30:52.462777 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:30:52.462870 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:30:52.479765 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:30:52.479841 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:30:52.496788 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:30:52.496860 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:30:52.513996 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:30:52.518433 systemd-networkd[749]: eth0: DHCPv6 lease lost
Dec 13 01:30:52.540743 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:30:52.559963 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:30:52.560105 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:30:52.580973 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:30:52.581344 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:30:52.601024 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:30:52.601097 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:30:52.612496 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:30:52.633496 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:30:52.633615 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:30:52.644613 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:30:52.644699 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:30:53.079525 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:30:52.667610 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:30:52.667713 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:30:52.685614 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:30:52.685713 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:30:52.704785 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:30:52.725087 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:30:52.725262 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:30:52.750649 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:30:52.750719 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:30:52.771649 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:30:52.771719 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:30:52.790642 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:30:52.790740 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:30:52.820560 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:30:52.820687 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:30:52.852586 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:30:52.852723 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:30:52.889630 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:30:52.891715 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:30:52.891803 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:30:52.918740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:30:52.918811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:30:52.950117 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:30:52.950254 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:30:52.967896 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:30:52.968044 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:30:52.990077 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:30:53.011622 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:30:53.039661 systemd[1]: Switching root.
Dec 13 01:30:53.366523 systemd-journald[183]: Journal stopped
Dec 13 01:30:55.790289 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:30:55.790341 kernel: SELinux: policy capability open_perms=1
Dec 13 01:30:55.790384 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:30:55.790402 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:30:55.790420 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:30:55.790439 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:30:55.790459 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:30:55.790483 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:30:55.790501 kernel: audit: type=1403 audit(1734053453.682:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:30:55.790522 systemd[1]: Successfully loaded SELinux policy in 81.913ms.
Dec 13 01:30:55.790544 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.339ms.
Dec 13 01:30:55.790567 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:30:55.790587 systemd[1]: Detected virtualization google.
Dec 13 01:30:55.790606 systemd[1]: Detected architecture x86-64.
Dec 13 01:30:55.790640 systemd[1]: Detected first boot.
Dec 13 01:30:55.790662 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:30:55.790683 zram_generator::config[1028]: No configuration found.
Dec 13 01:30:55.790707 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:30:55.790728 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:30:55.790752 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:30:55.790774 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:30:55.790795 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:30:55.790816 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:30:55.790837 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:30:55.790860 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:30:55.790882 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:30:55.790907 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:30:55.790929 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:30:55.790950 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:30:55.790972 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:30:55.790993 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:30:55.791021 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:30:55.791043 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:30:55.791065 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:30:55.791091 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:30:55.791112 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:30:55.791133 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:30:55.791154 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:30:55.791178 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:30:55.791199 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:30:55.791226 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:30:55.791249 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:30:55.791271 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:30:55.791297 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:30:55.791319 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:30:55.791341 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:30:55.791376 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:30:55.791398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:30:55.791420 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:30:55.791442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:30:55.791470 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:30:55.791492 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:30:55.791514 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:30:55.791537 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:30:55.791560 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:30:55.791586 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:30:55.791613 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:30:55.791643 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:30:55.791666 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:30:55.791691 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:30:55.791713 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:30:55.791737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:30:55.791760 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:30:55.791787 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:30:55.791810 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:30:55.791832 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:30:55.791855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:30:55.791877 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:30:55.791899 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:30:55.791923 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:30:55.791945 kernel: ACPI: bus type drm_connector registered
Dec 13 01:30:55.791970 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:30:55.791993 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:30:55.792015 kernel: fuse: init (API version 7.39)
Dec 13 01:30:55.792035 kernel: loop: module loaded
Dec 13 01:30:55.792055 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:30:55.792078 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:30:55.792100 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:30:55.792123 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:30:55.792175 systemd-journald[1115]: Collecting audit messages is disabled.
Dec 13 01:30:55.792228 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:30:55.792252 systemd-journald[1115]: Journal started
Dec 13 01:30:55.792298 systemd-journald[1115]: Runtime Journal (/run/log/journal/e20a27749c4841b38c73b4b2b4246cfb) is 8.0M, max 148.7M, 140.7M free.
Dec 13 01:30:54.590103 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:30:54.615180 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 01:30:54.615810 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:30:55.822489 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:30:55.834417 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:30:55.859377 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:30:55.865400 systemd[1]: Stopped verity-setup.service.
Dec 13 01:30:55.890413 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:30:55.901433 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:30:55.912957 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:30:55.923925 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:30:55.934803 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:30:55.944794 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:30:55.955862 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:30:55.966800 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:30:55.978032 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:30:55.990062 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:30:56.001947 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:30:56.002179 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:30:56.013883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:30:56.014106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:30:56.025988 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:30:56.026217 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:30:56.036926 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:30:56.037158 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:30:56.048967 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:30:56.049198 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:30:56.059948 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:30:56.060183 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:30:56.070931 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:30:56.080902 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:30:56.092912 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:30:56.104872 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:30:56.129690 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:30:56.147514 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:30:56.169506 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:30:56.179518 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:30:56.179591 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:30:56.190902 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:30:56.214641 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:30:56.227009 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:30:56.238748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:30:56.244055 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:30:56.255178 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:30:56.266625 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:30:56.275047 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:30:56.292796 systemd-journald[1115]: Time spent on flushing to /var/log/journal/e20a27749c4841b38c73b4b2b4246cfb is 103.258ms for 926 entries.
Dec 13 01:30:56.292796 systemd-journald[1115]: System Journal (/var/log/journal/e20a27749c4841b38c73b4b2b4246cfb) is 8.0M, max 584.8M, 576.8M free.
Dec 13 01:30:56.435771 systemd-journald[1115]: Received client request to flush runtime journal.
Dec 13 01:30:56.435888 kernel: loop0: detected capacity change from 0 to 142488
Dec 13 01:30:56.284549 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:30:56.296576 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:30:56.321686 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:30:56.340857 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:30:56.360747 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:30:56.376488 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:30:56.387729 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:30:56.404938 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:30:56.417023 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:30:56.429050 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:30:56.440133 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:30:56.465971 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:30:56.489211 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:30:56.509075 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:30:56.520503 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:30:56.535842 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:30:56.548733 udevadm[1148]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:30:56.551270 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:30:56.559475 kernel: loop1: detected capacity change from 0 to 211296
Dec 13 01:30:56.564263 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:30:56.610883 systemd-tmpfiles[1163]: ACLs are not supported, ignoring.
Dec 13 01:30:56.610919 systemd-tmpfiles[1163]: ACLs are not supported, ignoring.
Dec 13 01:30:56.624847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:30:56.675663 kernel: loop2: detected capacity change from 0 to 54824
Dec 13 01:30:56.779480 kernel: loop3: detected capacity change from 0 to 140768
Dec 13 01:30:56.877724 kernel: loop4: detected capacity change from 0 to 142488
Dec 13 01:30:56.927410 kernel: loop5: detected capacity change from 0 to 211296
Dec 13 01:30:56.974256 kernel: loop6: detected capacity change from 0 to 54824
Dec 13 01:30:57.016395 kernel: loop7: detected capacity change from 0 to 140768
Dec 13 01:30:57.070373 (sd-merge)[1171]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Dec 13 01:30:57.071299 (sd-merge)[1171]: Merged extensions into '/usr'.
Dec 13 01:30:57.082285 systemd[1]: Reloading requested from client PID 1146 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:30:57.082690 systemd[1]: Reloading...
Dec 13 01:30:57.228386 zram_generator::config[1193]: No configuration found.
Dec 13 01:30:57.472593 ldconfig[1141]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:30:57.522703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:30:57.625168 systemd[1]: Reloading finished in 541 ms.
Dec 13 01:30:57.662044 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:30:57.673176 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:30:57.697659 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:30:57.711561 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:30:57.723473 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:30:57.746885 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:30:57.754965 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:30:57.755719 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:30:57.756947 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:30:57.757378 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Dec 13 01:30:57.757507 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Dec 13 01:30:57.760667 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:30:57.760697 systemd[1]: Reloading...
Dec 13 01:30:57.765236 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:30:57.765256 systemd-tmpfiles[1238]: Skipping /boot
Dec 13 01:30:57.789285 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:30:57.790311 systemd-tmpfiles[1238]: Skipping /boot
Dec 13 01:30:57.814265 systemd-udevd[1241]: Using default interface naming scheme 'v255'.
Dec 13 01:30:57.893385 zram_generator::config[1266]: No configuration found.
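The (sd-merge) lines above are systemd-sysext activating the extension images staged earlier (the kubernetes.raw symlink under /etc/extensions plus Flatcar's bundled extensions) and overlaying them onto /usr, which is why the loop0-loop7 devices appear and why systemd then reloads. Before merging, each image must carry an extension-release file whose ID matches the host's os-release (or is "_any"). A minimal Python sketch of that compatibility rule, not systemd's implementation; the extension-release path is the documented location inside the mounted image:

    def parse_release(path):
        """Parse a minimal os-release/extension-release style file."""
        fields = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, value = line.split("=", 1)
                    fields[key] = value.strip('"')
        return fields

    host = parse_release("/etc/os-release")
    # Path as seen inside the mounted extension image:
    ext = parse_release(
        "/usr/lib/extension-release.d/extension-release.kubernetes")

    compatible = ext.get("ID") in ("_any", host.get("ID"))
    print("extension compatible with host:", compatible)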
Dec 13 01:30:58.054972 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1277)
Dec 13 01:30:58.055105 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1277)
Dec 13 01:30:58.177438 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Dec 13 01:30:58.191861 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 01:30:58.224406 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 01:30:58.231411 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:30:58.259177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:30:58.278420 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Dec 13 01:30:58.278539 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 01:30:58.317384 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1284)
Dec 13 01:30:58.327381 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:30:58.421086 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:30:58.421446 systemd[1]: Reloading finished in 660 ms.
Dec 13 01:30:58.447381 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:30:58.450800 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:30:58.469061 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:30:58.495557 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:30:58.518800 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:30:58.544967 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Dec 13 01:30:58.556691 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:30:58.568634 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:30:58.589612 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:30:58.600842 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:30:58.608570 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:30:58.629724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:30:58.634048 lvm[1355]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:30:58.648546 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:30:58.650893 augenrules[1361]: No rules
Dec 13 01:30:58.663823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:30:58.679591 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:30:58.698416 systemd[1]: Starting setup-oem.service - Setup OEM...
Dec 13 01:30:58.707672 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:30:58.713803 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:30:58.730598 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:30:58.750228 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:30:58.769891 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:30:58.779514 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:30:58.796721 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:30:58.816530 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:30:58.826539 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:30:58.837754 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:30:58.848149 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:30:58.859952 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:30:58.860533 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:30:58.860729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:30:58.861217 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:30:58.861495 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:30:58.862013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:30:58.862236 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:30:58.862722 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:30:58.862932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:30:58.867578 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:30:58.868686 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:30:58.881586 systemd[1]: Finished setup-oem.service - Setup OEM.
Dec 13 01:30:58.888109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:30:58.893882 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:30:58.896708 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Dec 13 01:30:58.897395 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:30:58.897518 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:30:58.904652 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:30:58.912488 lvm[1390]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:30:58.920721 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:30:58.920824 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:30:58.923627 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:30:58.963995 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:30:58.974485 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:30:59.000430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:30:59.012108 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Dec 13 01:30:59.024823 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:30:59.117001 systemd-networkd[1372]: lo: Link UP Dec 13 01:30:59.117448 systemd-networkd[1372]: lo: Gained carrier Dec 13 01:30:59.120274 systemd-networkd[1372]: Enumeration completed Dec 13 01:30:59.120716 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:30:59.121643 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:30:59.121788 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:30:59.122619 systemd-networkd[1372]: eth0: Link UP Dec 13 01:30:59.122632 systemd-networkd[1372]: eth0: Gained carrier Dec 13 01:30:59.122661 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:30:59.127480 systemd-resolved[1373]: Positive Trust Anchors: Dec 13 01:30:59.127499 systemd-resolved[1373]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:30:59.127560 systemd-resolved[1373]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:30:59.133464 systemd-networkd[1372]: eth0: DHCPv4 address 10.128.0.13/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 01:30:59.134211 systemd-resolved[1373]: Defaulting to hostname 'linux'. Dec 13 01:30:59.137613 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:30:59.148725 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:30:59.158754 systemd[1]: Reached target network.target - Network. Dec 13 01:30:59.167529 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:30:59.178580 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:30:59.188721 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:30:59.199632 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:30:59.210809 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:30:59.220714 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:30:59.231584 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
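Note: the Positive Trust Anchor logged by systemd-resolved above is the DNSSEC root key-signing key (key tag 20326), and the negative anchors are private and special-use zones it will not attempt to validate. The per-link state resolved derived from DHCP (server 169.254.169.254 here) can be inspected with resolvectl; a sketch, assuming systemd-resolved remains the active stub resolver:

    $ resolvectl status eth0                      # shows the DHCP-provided DNS server
    $ resolvectl query metadata.google.internal   # example lookup through the stub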
Dec 13 01:30:59.244566 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:30:59.244639 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:30:59.253569 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:30:59.264775 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:30:59.276218 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:30:59.297292 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:30:59.308295 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:30:59.318693 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:30:59.328538 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:30:59.337593 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:30:59.337637 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:30:59.344573 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:30:59.364651 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:30:59.392036 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:30:59.409134 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:30:59.427682 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:30:59.437512 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:30:59.441606 jq[1425]: false Dec 13 01:30:59.444186 coreos-metadata[1421]: Dec 13 01:30:59.444 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Dec 13 01:30:59.446673 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:30:59.451784 coreos-metadata[1421]: Dec 13 01:30:59.451 INFO Fetch successful Dec 13 01:30:59.451784 coreos-metadata[1421]: Dec 13 01:30:59.451 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Dec 13 01:30:59.452582 coreos-metadata[1421]: Dec 13 01:30:59.452 INFO Fetch successful Dec 13 01:30:59.452688 coreos-metadata[1421]: Dec 13 01:30:59.452 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Dec 13 01:30:59.453860 coreos-metadata[1421]: Dec 13 01:30:59.453 INFO Fetch successful Dec 13 01:30:59.453860 coreos-metadata[1421]: Dec 13 01:30:59.453 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Dec 13 01:30:59.456452 coreos-metadata[1421]: Dec 13 01:30:59.454 INFO Fetch successful Dec 13 01:30:59.467505 systemd[1]: Started ntpd.service - Network Time Service. 
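Note: the coreos-metadata fetches above go to the GCE metadata server at 169.254.169.254, which only answers requests carrying the Metadata-Flavor header. A manual reproduction from inside the instance:

    $ curl -s -H "Metadata-Flavor: Google" \
        "http://169.254.169.254/computeMetadata/v1/instance/hostname"
    $ curl -s -H "Metadata-Flavor: Google" \
        "http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip"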
Dec 13 01:30:59.477774 extend-filesystems[1426]: Found loop4 Dec 13 01:30:59.483711 extend-filesystems[1426]: Found loop5 Dec 13 01:30:59.483711 extend-filesystems[1426]: Found loop6 Dec 13 01:30:59.483711 extend-filesystems[1426]: Found loop7 Dec 13 01:30:59.483711 extend-filesystems[1426]: Found sda Dec 13 01:30:59.483711 extend-filesystems[1426]: Found sda1 Dec 13 01:30:59.483711 extend-filesystems[1426]: Found sda2 Dec 13 01:30:59.483711 extend-filesystems[1426]: Found sda3 Dec 13 01:30:59.483711 extend-filesystems[1426]: Found usr Dec 13 01:30:59.483711 extend-filesystems[1426]: Found sda4 Dec 13 01:30:59.483711 extend-filesystems[1426]: Found sda6 Dec 13 01:30:59.483711 extend-filesystems[1426]: Found sda7 Dec 13 01:30:59.483711 extend-filesystems[1426]: Found sda9 Dec 13 01:30:59.483711 extend-filesystems[1426]: Checking size of /dev/sda9 Dec 13 01:30:59.666556 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 01:30:59.666604 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 01:30:59.666625 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1305) Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: ---------------------------------------------------- Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: corporation. 
Support and training for ntp-4 are Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: available at https://www.nwtime.org/support Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: ---------------------------------------------------- Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: proto: precision = 0.105 usec (-23) Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: basedate set to 2024-11-30 Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: gps base set to 2024-12-01 (week 2343) Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: Listen normally on 3 eth0 10.128.0.13:123 Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: Listen normally on 4 lo [::1]:123 Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: bind(21) AF_INET6 fe80::4001:aff:fe80:d%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:d%2#123 Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: failed to init interface for address fe80::4001:aff:fe80:d%2 Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: Listening on routing socket on fd #21 for interface updates Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:30:59.666645 ntpd[1429]: 13 Dec 01:30:59 ntpd[1429]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:30:59.530714 dbus-daemon[1422]: [system] SELinux support is enabled Dec 13 01:30:59.488047 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:30:59.669190 extend-filesystems[1426]: Resized partition /dev/sda9 Dec 13 01:30:59.533799 dbus-daemon[1422]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1372 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:30:59.511013 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:30:59.669857 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:30:59.669857 extend-filesystems[1444]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 01:30:59.669857 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 01:30:59.669857 extend-filesystems[1444]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 01:30:59.555010 ntpd[1429]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:30:59.542607 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:30:59.754418 extend-filesystems[1426]: Resized filesystem in /dev/sda9 Dec 13 01:30:59.555041 ntpd[1429]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:30:59.608462 systemd[1]: Starting systemd-logind.service - User Login Management... 
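Note: the extend-filesystems sequence above is an online grow: the kernel lines show /dev/sda9 going from 1617920 to 2538491 4k blocks while mounted at /. Done by hand it is two steps, sketched here under the assumption of an ext4 root on partition 9 (growpart comes from cloud-utils and is not necessarily present on Flatcar):

    growpart /dev/sda 9     # extend the partition into free space
    resize2fs /dev/sda9     # ext4 supports resizing while mounted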
Dec 13 01:30:59.555055 ntpd[1429]: ---------------------------------------------------- Dec 13 01:30:59.649115 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 01:30:59.555069 ntpd[1429]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:30:59.775451 update_engine[1454]: I20241213 01:30:59.734875 1454 main.cc:92] Flatcar Update Engine starting Dec 13 01:30:59.775451 update_engine[1454]: I20241213 01:30:59.742916 1454 update_check_scheduler.cc:74] Next update check in 8m43s Dec 13 01:30:59.650315 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:30:59.555082 ntpd[1429]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:30:59.776264 jq[1455]: true Dec 13 01:30:59.658646 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:30:59.555097 ntpd[1429]: corporation. Support and training for ntp-4 are Dec 13 01:30:59.689996 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:30:59.555110 ntpd[1429]: available at https://www.nwtime.org/support Dec 13 01:30:59.706922 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:30:59.555123 ntpd[1429]: ---------------------------------------------------- Dec 13 01:30:59.749896 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:30:59.560911 ntpd[1429]: proto: precision = 0.105 usec (-23) Dec 13 01:30:59.750145 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:30:59.566263 ntpd[1429]: basedate set to 2024-11-30 Dec 13 01:30:59.750659 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:30:59.566290 ntpd[1429]: gps base set to 2024-12-01 (week 2343) Dec 13 01:30:59.750905 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:30:59.570318 ntpd[1429]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:30:59.764302 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:30:59.571073 ntpd[1429]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:30:59.766666 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:30:59.581978 ntpd[1429]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:30:59.775997 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:30:59.582079 ntpd[1429]: Listen normally on 3 eth0 10.128.0.13:123 Dec 13 01:30:59.776028 systemd-logind[1449]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 13 01:30:59.582153 ntpd[1429]: Listen normally on 4 lo [::1]:123 Dec 13 01:30:59.776060 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:30:59.582243 ntpd[1429]: bind(21) AF_INET6 fe80::4001:aff:fe80:d%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:30:59.777144 systemd-logind[1449]: New seat seat0. 
Dec 13 01:30:59.582275 ntpd[1429]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:d%2#123 Dec 13 01:30:59.582298 ntpd[1429]: failed to init interface for address fe80::4001:aff:fe80:d%2 Dec 13 01:30:59.582442 ntpd[1429]: Listening on routing socket on fd #21 for interface updates Dec 13 01:30:59.587818 ntpd[1429]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:30:59.587872 ntpd[1429]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:30:59.786028 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:30:59.802995 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:30:59.803268 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:30:59.846965 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:30:59.847871 jq[1459]: true Dec 13 01:30:59.872782 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:30:59.893006 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:30:59.921555 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:30:59.943003 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:30:59.943298 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:30:59.943577 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:30:59.951041 tar[1458]: linux-amd64/helm Dec 13 01:30:59.971012 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:30:59.966981 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:30:59.973762 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:30:59.977971 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:30:59.978234 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:30:59.999449 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:31:00.020226 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:31:00.045195 systemd[1]: Starting sshkeys.service... Dec 13 01:31:00.133159 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:31:00.162079 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:31:00.172471 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:31:00.196648 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:31:00.250815 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:31:00.252797 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:31:00.273838 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
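Note: the repeated ntpd failures to bind fe80::4001:aff:fe80:d%2 above are a timing artifact rather than misconfiguration: an IPv6 link-local address stays tentative until duplicate address detection completes, and bind() fails with "Cannot assign requested address" until then; ntpd retries and does start listening on that address a few seconds later in this log. The DAD state can be checked directly:

    $ ip -6 addr show dev eth0
    # a freshly configured address carries the "tentative" flag until DAD finishes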
Dec 13 01:31:00.280286 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:31:00.282498 dbus-daemon[1422]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1490 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:31:00.283965 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:31:00.292756 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:31:00.304950 coreos-metadata[1501]: Dec 13 01:31:00.304 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 01:31:00.305396 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:31:00.310085 coreos-metadata[1501]: Dec 13 01:31:00.309 INFO Fetch failed with 404: resource not found Dec 13 01:31:00.310085 coreos-metadata[1501]: Dec 13 01:31:00.309 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 01:31:00.311557 coreos-metadata[1501]: Dec 13 01:31:00.310 INFO Fetch successful Dec 13 01:31:00.311557 coreos-metadata[1501]: Dec 13 01:31:00.310 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 01:31:00.317054 coreos-metadata[1501]: Dec 13 01:31:00.316 INFO Fetch failed with 404: resource not found Dec 13 01:31:00.317054 coreos-metadata[1501]: Dec 13 01:31:00.316 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 01:31:00.318639 coreos-metadata[1501]: Dec 13 01:31:00.318 INFO Fetch failed with 404: resource not found Dec 13 01:31:00.318639 coreos-metadata[1501]: Dec 13 01:31:00.318 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 01:31:00.319657 coreos-metadata[1501]: Dec 13 01:31:00.319 INFO Fetch successful Dec 13 01:31:00.329902 unknown[1501]: wrote ssh authorized keys file for user: core Dec 13 01:31:00.354417 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:31:00.374337 update-ssh-keys[1520]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:31:00.385195 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:31:00.400911 polkitd[1518]: Started polkitd version 121 Dec 13 01:31:00.403857 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:31:00.414114 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:31:00.419861 polkitd[1518]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:31:00.420135 polkitd[1518]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:31:00.421706 polkitd[1518]: Finished loading, compiling and executing 2 rules Dec 13 01:31:00.422489 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:31:00.423208 polkitd[1518]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:31:00.425599 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:31:00.436343 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:31:00.451914 systemd[1]: Finished sshkeys.service. Dec 13 01:31:00.479277 systemd-hostnamed[1490]: Hostname set to (transient) Dec 13 01:31:00.481022 systemd-resolved[1373]: System hostname changed to 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal'. 
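Note: polkitd above compiles rules from /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d; rules are small JavaScript files evaluated in filename order. A minimal sketch of the format (filename and group are illustrative, not taken from this system):

    // /etc/polkit-1/rules.d/49-example.rules (hypothetical)
    polkit.addRule(function(action, subject) {
        if (subject.isInGroup("wheel")) {
            return polkit.Result.YES;
        }
    });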
Dec 13 01:31:00.545464 containerd[1460]: time="2024-12-13T01:31:00.543952928Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:31:00.555630 ntpd[1429]: bind(24) AF_INET6 fe80::4001:aff:fe80:d%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:31:00.556139 ntpd[1429]: 13 Dec 01:31:00 ntpd[1429]: bind(24) AF_INET6 fe80::4001:aff:fe80:d%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:31:00.556234 ntpd[1429]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:d%2#123 Dec 13 01:31:00.556382 ntpd[1429]: 13 Dec 01:31:00 ntpd[1429]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:d%2#123 Dec 13 01:31:00.556470 ntpd[1429]: failed to init interface for address fe80::4001:aff:fe80:d%2 Dec 13 01:31:00.556563 ntpd[1429]: 13 Dec 01:31:00 ntpd[1429]: failed to init interface for address fe80::4001:aff:fe80:d%2 Dec 13 01:31:00.598557 containerd[1460]: time="2024-12-13T01:31:00.598461387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:00.601687 containerd[1460]: time="2024-12-13T01:31:00.601310532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:31:00.601687 containerd[1460]: time="2024-12-13T01:31:00.601414326Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:31:00.601687 containerd[1460]: time="2024-12-13T01:31:00.601471849Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:31:00.601907 containerd[1460]: time="2024-12-13T01:31:00.601748803Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:31:00.601907 containerd[1460]: time="2024-12-13T01:31:00.601796715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:00.602008 containerd[1460]: time="2024-12-13T01:31:00.601905259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:31:00.602008 containerd[1460]: time="2024-12-13T01:31:00.601927989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:00.602606 containerd[1460]: time="2024-12-13T01:31:00.602298772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:31:00.602606 containerd[1460]: time="2024-12-13T01:31:00.602334366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:00.602606 containerd[1460]: time="2024-12-13T01:31:00.602400448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:31:00.602606 containerd[1460]: time="2024-12-13T01:31:00.602422013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:00.602606 containerd[1460]: time="2024-12-13T01:31:00.602580898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:00.603484 containerd[1460]: time="2024-12-13T01:31:00.602994549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:00.603484 containerd[1460]: time="2024-12-13T01:31:00.603288851Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:31:00.603484 containerd[1460]: time="2024-12-13T01:31:00.603338613Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:31:00.603648 containerd[1460]: time="2024-12-13T01:31:00.603578715Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:31:00.603736 containerd[1460]: time="2024-12-13T01:31:00.603712188Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:31:00.612108 containerd[1460]: time="2024-12-13T01:31:00.611999028Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:31:00.612108 containerd[1460]: time="2024-12-13T01:31:00.612085825Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:31:00.612108 containerd[1460]: time="2024-12-13T01:31:00.612112936Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.612135648Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.612161025Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.612411916Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.612839892Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.612998625Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.613017196Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.613037242Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.613054346Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.613068398Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.613082852Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.613098635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.613113344Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.613127163Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:31:00.614515 containerd[1460]: time="2024-12-13T01:31:00.613140023Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613152145Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613179529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613200873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613214533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613228729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613246839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613266666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613285183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613306150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613320286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613343517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613394745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613423866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613444238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615106 containerd[1460]: time="2024-12-13T01:31:00.613481400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:31:00.615788 containerd[1460]: time="2024-12-13T01:31:00.613519515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615788 containerd[1460]: time="2024-12-13T01:31:00.613539744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615788 containerd[1460]: time="2024-12-13T01:31:00.613563647Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:31:00.615788 containerd[1460]: time="2024-12-13T01:31:00.613643219Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:31:00.615788 containerd[1460]: time="2024-12-13T01:31:00.613671463Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:31:00.615788 containerd[1460]: time="2024-12-13T01:31:00.613708339Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:31:00.615788 containerd[1460]: time="2024-12-13T01:31:00.613736333Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:31:00.615788 containerd[1460]: time="2024-12-13T01:31:00.613754304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:31:00.615788 containerd[1460]: time="2024-12-13T01:31:00.613782322Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:31:00.615788 containerd[1460]: time="2024-12-13T01:31:00.613801799Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:31:00.615788 containerd[1460]: time="2024-12-13T01:31:00.613820252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:31:00.616339 containerd[1460]: time="2024-12-13T01:31:00.614296244Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:31:00.616931 containerd[1460]: time="2024-12-13T01:31:00.616747994Z" level=info msg="Connect containerd service" Dec 13 01:31:00.616931 containerd[1460]: time="2024-12-13T01:31:00.616815654Z" level=info msg="using legacy CRI server" Dec 13 01:31:00.616931 containerd[1460]: time="2024-12-13T01:31:00.616831114Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:31:00.621305 containerd[1460]: time="2024-12-13T01:31:00.620447426Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:31:00.622922 containerd[1460]: time="2024-12-13T01:31:00.621721389Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:31:00.622922 
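Note: the CRI config dump above corresponds to a near-stock containerd 1.7 setup: overlayfs snapshotter, runc through io.containerd.runc.v2 with SystemdCgroup:true, and pause:3.8 as the sandbox image. Reconstructed from the dumped values, the matching /etc/containerd/config.toml fragment would look roughly like this sketch:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true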
containerd[1460]: time="2024-12-13T01:31:00.622224626Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:31:00.622922 containerd[1460]: time="2024-12-13T01:31:00.622293125Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:31:00.622922 containerd[1460]: time="2024-12-13T01:31:00.622344993Z" level=info msg="Start subscribing containerd event" Dec 13 01:31:00.622922 containerd[1460]: time="2024-12-13T01:31:00.622431142Z" level=info msg="Start recovering state" Dec 13 01:31:00.622922 containerd[1460]: time="2024-12-13T01:31:00.622522710Z" level=info msg="Start event monitor" Dec 13 01:31:00.622922 containerd[1460]: time="2024-12-13T01:31:00.622549962Z" level=info msg="Start snapshots syncer" Dec 13 01:31:00.622922 containerd[1460]: time="2024-12-13T01:31:00.622564938Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:31:00.622922 containerd[1460]: time="2024-12-13T01:31:00.622579916Z" level=info msg="Start streaming server" Dec 13 01:31:00.622784 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:31:00.625448 containerd[1460]: time="2024-12-13T01:31:00.625418775Z" level=info msg="containerd successfully booted in 0.083158s" Dec 13 01:31:00.796549 systemd-networkd[1372]: eth0: Gained IPv6LL Dec 13 01:31:00.803016 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:31:00.815607 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:31:00.836696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:00.856423 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:31:00.875754 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Dec 13 01:31:00.909045 init.sh[1540]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 01:31:00.909623 init.sh[1540]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 01:31:00.909623 init.sh[1540]: + /usr/bin/google_instance_setup Dec 13 01:31:00.910194 tar[1458]: linux-amd64/LICENSE Dec 13 01:31:00.910194 tar[1458]: linux-amd64/README.md Dec 13 01:31:00.922430 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:31:00.951273 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:31:01.483542 instance-setup[1547]: INFO Running google_set_multiqueue. Dec 13 01:31:01.504112 instance-setup[1547]: INFO Set channels for eth0 to 2. Dec 13 01:31:01.509298 instance-setup[1547]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 01:31:01.511295 instance-setup[1547]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 01:31:01.511638 instance-setup[1547]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 01:31:01.514007 instance-setup[1547]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 01:31:01.514080 instance-setup[1547]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 13 01:31:01.516170 instance-setup[1547]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 01:31:01.516228 instance-setup[1547]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
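Note: containerd's earlier error about loading CNI config is expected at this stage: /etc/cni/net.d is empty until a CNI plugin is installed (typically by the cluster bootstrap), and the conf syncer started above will pick the file up when it appears. For illustration only, a minimal bridge conflist of the kind that would satisfy it (all names and subnets are placeholders):

    {
      "cniVersion": "0.4.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }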
Dec 13 01:31:01.518212 instance-setup[1547]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 01:31:01.526280 instance-setup[1547]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 13 01:31:01.531311 instance-setup[1547]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 13 01:31:01.533458 instance-setup[1547]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 01:31:01.533509 instance-setup[1547]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 01:31:01.553158 init.sh[1540]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 01:31:01.710557 startup-script[1583]: INFO Starting startup scripts. Dec 13 01:31:01.716967 startup-script[1583]: INFO No startup scripts found in metadata. Dec 13 01:31:01.717258 startup-script[1583]: INFO Finished running startup scripts. Dec 13 01:31:01.740694 init.sh[1540]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 01:31:01.740694 init.sh[1540]: + daemon_pids=() Dec 13 01:31:01.741196 init.sh[1540]: + for d in accounts clock_skew network Dec 13 01:31:01.743385 init.sh[1540]: + daemon_pids+=($!) Dec 13 01:31:01.743385 init.sh[1540]: + for d in accounts clock_skew network Dec 13 01:31:01.743385 init.sh[1540]: + daemon_pids+=($!) Dec 13 01:31:01.743385 init.sh[1540]: + for d in accounts clock_skew network Dec 13 01:31:01.743612 init.sh[1586]: + /usr/bin/google_accounts_daemon Dec 13 01:31:01.743965 init.sh[1587]: + /usr/bin/google_clock_skew_daemon Dec 13 01:31:01.744209 init.sh[1588]: + /usr/bin/google_network_daemon Dec 13 01:31:01.744489 init.sh[1540]: + daemon_pids+=($!) Dec 13 01:31:01.744489 init.sh[1540]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 01:31:01.744489 init.sh[1540]: + /usr/bin/systemd-notify --ready Dec 13 01:31:01.764010 systemd[1]: Started oem-gce.service - GCE Linux Agent. Dec 13 01:31:01.778544 init.sh[1540]: + wait -n 1586 1587 1588 Dec 13 01:31:01.928150 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:31:01.951813 systemd[1]: Started sshd@0-10.128.0.13:22-147.75.109.163:55076.service - OpenSSH per-connection server daemon (147.75.109.163:55076). Dec 13 01:31:02.127319 groupadd[1599]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 01:31:02.135198 groupadd[1599]: group added to /etc/gshadow: name=google-sudoers Dec 13 01:31:02.169172 google-networking[1588]: INFO Starting Google Networking daemon. Dec 13 01:31:02.179210 google-clock-skew[1587]: INFO Starting Google Clock Skew daemon. Dec 13 01:31:02.190527 google-clock-skew[1587]: INFO Clock drift token has changed: 0. Dec 13 01:31:02.214248 groupadd[1599]: new group: name=google-sudoers, GID=1000 Dec 13 01:31:02.252888 google-accounts[1586]: INFO Starting Google Accounts daemon. Dec 13 01:31:02.269106 google-accounts[1586]: WARNING OS Login not installed. Dec 13 01:31:02.272519 google-accounts[1586]: INFO Creating a new user account for 0. Dec 13 01:31:02.280380 init.sh[1609]: useradd: invalid user name '0': use --badname to ignore Dec 13 01:31:02.280876 google-accounts[1586]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 01:31:02.000445 systemd-resolved[1373]: Clock change detected. Flushing caches. Dec 13 01:31:02.017466 systemd-journald[1115]: Time jumped backwards, rotating. 
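Note: google_metadata_script_runner above found no startup scripts because none are set in instance metadata. Attaching one from outside the VM is a single gcloud call; a sketch with a placeholder instance name (a --zone flag may also be required):

    gcloud compute instances add-metadata INSTANCE_NAME \
        --metadata-from-file startup-script=startup.sh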
Dec 13 01:31:02.017566 sshd[1593]: Accepted publickey for core from 147.75.109.163 port 55076 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:31:02.001263 google-clock-skew[1587]: INFO Synced system time with hardware clock. Dec 13 01:31:02.017520 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:02.036100 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:31:02.053369 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:31:02.070128 systemd-logind[1449]: New session 1 of user core. Dec 13 01:31:02.078286 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:31:02.098475 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:31:02.130206 (systemd)[1614]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:31:02.247279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:02.250200 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:02.264436 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:31:02.322108 systemd[1614]: Queued start job for default target default.target. Dec 13 01:31:02.331632 systemd[1614]: Created slice app.slice - User Application Slice. Dec 13 01:31:02.331686 systemd[1614]: Reached target paths.target - Paths. Dec 13 01:31:02.331711 systemd[1614]: Reached target timers.target - Timers. Dec 13 01:31:02.333836 systemd[1614]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:31:02.365159 systemd[1614]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:31:02.366400 systemd[1614]: Reached target sockets.target - Sockets. Dec 13 01:31:02.366431 systemd[1614]: Reached target basic.target - Basic System. Dec 13 01:31:02.366509 systemd[1614]: Reached target default.target - Main User Target. Dec 13 01:31:02.366570 systemd[1614]: Startup finished in 225ms. Dec 13 01:31:02.366876 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:31:02.384243 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:31:02.395204 systemd[1]: Startup finished in 1.055s (kernel) + 8.894s (initrd) + 9.121s (userspace) = 19.071s. Dec 13 01:31:02.636329 systemd[1]: Started sshd@1-10.128.0.13:22-147.75.109.163:55078.service - OpenSSH per-connection server daemon (147.75.109.163:55078). Dec 13 01:31:02.941271 sshd[1639]: Accepted publickey for core from 147.75.109.163 port 55078 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:31:02.943296 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:02.949978 systemd-logind[1449]: New session 2 of user core. Dec 13 01:31:02.955162 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:31:03.158747 sshd[1639]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:03.165082 systemd[1]: sshd@1-10.128.0.13:22-147.75.109.163:55078.service: Deactivated successfully. Dec 13 01:31:03.167771 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:31:03.169194 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:31:03.170644 systemd-logind[1449]: Removed session 2. 
Dec 13 01:31:03.217331 systemd[1]: Started sshd@2-10.128.0.13:22-147.75.109.163:55086.service - OpenSSH per-connection server daemon (147.75.109.163:55086). Dec 13 01:31:03.227173 ntpd[1429]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:d%2]:123 Dec 13 01:31:03.229210 ntpd[1429]: 13 Dec 01:31:03 ntpd[1429]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:d%2]:123 Dec 13 01:31:03.365256 kubelet[1625]: E1213 01:31:03.365135 1625 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:03.368309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:03.368567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:03.369099 systemd[1]: kubelet.service: Consumed 1.263s CPU time. Dec 13 01:31:03.531126 sshd[1648]: Accepted publickey for core from 147.75.109.163 port 55086 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:31:03.533175 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:03.539890 systemd-logind[1449]: New session 3 of user core. Dec 13 01:31:03.558260 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:31:03.747244 sshd[1648]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:03.751560 systemd[1]: sshd@2-10.128.0.13:22-147.75.109.163:55086.service: Deactivated successfully. Dec 13 01:31:03.754051 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:31:03.756007 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:31:03.757479 systemd-logind[1449]: Removed session 3. Dec 13 01:31:03.801752 systemd[1]: Started sshd@3-10.128.0.13:22-147.75.109.163:55102.service - OpenSSH per-connection server daemon (147.75.109.163:55102). Dec 13 01:31:04.095212 sshd[1656]: Accepted publickey for core from 147.75.109.163 port 55102 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:31:04.097106 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:04.103278 systemd-logind[1449]: New session 4 of user core. Dec 13 01:31:04.113238 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:31:04.310420 sshd[1656]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:04.314688 systemd[1]: sshd@3-10.128.0.13:22-147.75.109.163:55102.service: Deactivated successfully. Dec 13 01:31:04.317052 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:31:04.318985 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:31:04.320353 systemd-logind[1449]: Removed session 4. Dec 13 01:31:04.365339 systemd[1]: Started sshd@4-10.128.0.13:22-147.75.109.163:55108.service - OpenSSH per-connection server daemon (147.75.109.163:55108). Dec 13 01:31:04.648652 sshd[1663]: Accepted publickey for core from 147.75.109.163 port 55108 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:31:04.650615 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:04.657054 systemd-logind[1449]: New session 5 of user core. Dec 13 01:31:04.663155 systemd[1]: Started session-5.scope - Session 5 of User core. 
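Note: the kubelet crash above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is normally generated by kubeadm init or kubeadm join, so the unit keeps failing until the node joins a cluster. For reference, the generated file begins with a KubeletConfiguration document along these lines (values illustrative, not from this node):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    clusterDNS:
      - 10.96.0.10
    clusterDomain: cluster.local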
Dec 13 01:31:04.841247 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:31:04.841752 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:04.858796 sudo[1666]: pam_unix(sudo:session): session closed for user root Dec 13 01:31:04.901975 sshd[1663]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:04.907587 systemd[1]: sshd@4-10.128.0.13:22-147.75.109.163:55108.service: Deactivated successfully. Dec 13 01:31:04.909963 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:31:04.911898 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:31:04.913515 systemd-logind[1449]: Removed session 5. Dec 13 01:31:04.955217 systemd[1]: Started sshd@5-10.128.0.13:22-147.75.109.163:55120.service - OpenSSH per-connection server daemon (147.75.109.163:55120). Dec 13 01:31:05.259178 sshd[1671]: Accepted publickey for core from 147.75.109.163 port 55120 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:31:05.261129 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:05.267007 systemd-logind[1449]: New session 6 of user core. Dec 13 01:31:05.275202 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:31:05.438586 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:31:05.439138 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:05.444187 sudo[1675]: pam_unix(sudo:session): session closed for user root Dec 13 01:31:05.457818 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:31:05.458336 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:05.481766 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:31:05.483956 auditctl[1678]: No rules Dec 13 01:31:05.484434 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:31:05.484698 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:31:05.491430 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:31:05.526271 augenrules[1696]: No rules Dec 13 01:31:05.527351 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:31:05.529252 sudo[1674]: pam_unix(sudo:session): session closed for user root Dec 13 01:31:05.572568 sshd[1671]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:05.578472 systemd[1]: sshd@5-10.128.0.13:22-147.75.109.163:55120.service: Deactivated successfully. Dec 13 01:31:05.580698 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:31:05.581694 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:31:05.583490 systemd-logind[1449]: Removed session 6. Dec 13 01:31:05.634680 systemd[1]: Started sshd@6-10.128.0.13:22-147.75.109.163:55124.service - OpenSSH per-connection server daemon (147.75.109.163:55124). Dec 13 01:31:05.917913 sshd[1704]: Accepted publickey for core from 147.75.109.163 port 55124 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:31:05.920464 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:05.927382 systemd-logind[1449]: New session 7 of user core. 
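Note: the sudo sequence above deletes the shipped audit rule files and restarts audit-rules, which is why both auditctl and augenrules then report "No rules": augenrules concatenates /etc/audit/rules.d/*.rules and feeds the result to auditctl. Restoring a watch would be a one-file sketch (path and key are illustrative):

    # /etc/audit/rules.d/10-identity.rules (hypothetical)
    -w /etc/passwd -p wa -k identity
    # reload: augenrules --load   (or restart audit-rules.service)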
Dec 13 01:31:05.938255 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:31:06.096108 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:31:06.096598 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:06.541324 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:31:06.544380 (dockerd)[1723]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:31:07.005195 dockerd[1723]: time="2024-12-13T01:31:07.005024974Z" level=info msg="Starting up" Dec 13 01:31:07.199307 dockerd[1723]: time="2024-12-13T01:31:07.199246276Z" level=info msg="Loading containers: start." Dec 13 01:31:07.362955 kernel: Initializing XFRM netlink socket Dec 13 01:31:07.470680 systemd-networkd[1372]: docker0: Link UP Dec 13 01:31:07.492767 dockerd[1723]: time="2024-12-13T01:31:07.492691446Z" level=info msg="Loading containers: done." Dec 13 01:31:07.513855 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3362346218-merged.mount: Deactivated successfully. Dec 13 01:31:07.514976 dockerd[1723]: time="2024-12-13T01:31:07.514874806Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:31:07.515101 dockerd[1723]: time="2024-12-13T01:31:07.515063661Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:31:07.515265 dockerd[1723]: time="2024-12-13T01:31:07.515224237Z" level=info msg="Daemon has completed initialization" Dec 13 01:31:07.560616 dockerd[1723]: time="2024-12-13T01:31:07.560470130Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:31:07.560754 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:31:08.648165 containerd[1460]: time="2024-12-13T01:31:08.648111850Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:31:09.136824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804093540.mount: Deactivated successfully. 
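Note: dockerd's overlay2 message above ("Not using native diff ... CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") is informational and mainly affects image-build performance. The driver choice itself lives in /etc/docker/daemon.json; a sketch pinning what the daemon already auto-selected:

    {
      "storage-driver": "overlay2"
    }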
Dec 13 01:31:10.991723 containerd[1460]: time="2024-12-13T01:31:10.991622897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:10.993745 containerd[1460]: time="2024-12-13T01:31:10.993643495Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35145882"
Dec 13 01:31:10.995790 containerd[1460]: time="2024-12-13T01:31:10.995712232Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:11.000898 containerd[1460]: time="2024-12-13T01:31:11.000811913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:11.002769 containerd[1460]: time="2024-12-13T01:31:11.002513696Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.354342606s"
Dec 13 01:31:11.002769 containerd[1460]: time="2024-12-13T01:31:11.002573315Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 01:31:11.035440 containerd[1460]: time="2024-12-13T01:31:11.035352887Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:31:12.844364 containerd[1460]: time="2024-12-13T01:31:12.844288591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:12.846021 containerd[1460]: time="2024-12-13T01:31:12.845946733Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32219666"
Dec 13 01:31:12.847559 containerd[1460]: time="2024-12-13T01:31:12.847488476Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:12.851767 containerd[1460]: time="2024-12-13T01:31:12.851685307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:12.853426 containerd[1460]: time="2024-12-13T01:31:12.853206735Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.817792158s"
Dec 13 01:31:12.853426 containerd[1460]: time="2024-12-13T01:31:12.853257314Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 01:31:12.883016 containerd[1460]: time="2024-12-13T01:31:12.882969882Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 01:31:13.618839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:31:13.629305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:13.898297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:13.907641 (kubelet)[1947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:14.013479 kubelet[1947]: E1213 01:31:14.013386 1947 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:14.021250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:14.021528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:14.322902 containerd[1460]: time="2024-12-13T01:31:14.322728094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:14.324974 containerd[1460]: time="2024-12-13T01:31:14.324762468Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17334738"
Dec 13 01:31:14.326728 containerd[1460]: time="2024-12-13T01:31:14.326659572Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:14.330855 containerd[1460]: time="2024-12-13T01:31:14.330781755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:14.333552 containerd[1460]: time="2024-12-13T01:31:14.332335490Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.449305992s"
Dec 13 01:31:14.333552 containerd[1460]: time="2024-12-13T01:31:14.332397131Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 01:31:14.363429 containerd[1460]: time="2024-12-13T01:31:14.363380167Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:31:15.571250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199460191.mount: Deactivated successfully.
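The kubelet exits with status=1/FAILURE because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written during kubeadm init/join, so the unit keeps restarting until bootstrap completes. Purely for illustration, a minimal hand-written KubeletConfiguration of the kind that would satisfy this check might look as follows; the two field values are taken from settings visible later in this log, not from the node's real file, and hand-writing it would normally be unnecessary:

    # Sketch only: kubeadm generates the real /var/lib/kubelet/config.yaml itself
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches "CgroupDriver":"systemd" below
    staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" below
    EOF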
Dec 13 01:31:16.125040 containerd[1460]: time="2024-12-13T01:31:16.124965444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:16.126561 containerd[1460]: time="2024-12-13T01:31:16.126486971Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28621853"
Dec 13 01:31:16.128294 containerd[1460]: time="2024-12-13T01:31:16.128223783Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:16.131983 containerd[1460]: time="2024-12-13T01:31:16.131919219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:16.133058 containerd[1460]: time="2024-12-13T01:31:16.132844755Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.769332405s"
Dec 13 01:31:16.133058 containerd[1460]: time="2024-12-13T01:31:16.132898889Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 01:31:16.165685 containerd[1460]: time="2024-12-13T01:31:16.165632808Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:31:16.651350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4093209515.mount: Deactivated successfully.
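These pulls are driven through containerd's CRI plugin, so they can be reproduced or inspected from the node with crictl. A small sketch, assuming crictl is installed and containerd's socket sits at its default path:

    # Point crictl at containerd's CRI endpoint (default socket path assumed)
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-proxy:v1.29.12
    # Confirm the image and its digest landed in containerd's image store
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep kube-proxy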
Dec 13 01:31:17.730708 containerd[1460]: time="2024-12-13T01:31:17.730627705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:17.732436 containerd[1460]: time="2024-12-13T01:31:17.732352456Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419"
Dec 13 01:31:17.734482 containerd[1460]: time="2024-12-13T01:31:17.734409087Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:17.738555 containerd[1460]: time="2024-12-13T01:31:17.738462239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:17.740180 containerd[1460]: time="2024-12-13T01:31:17.739964324Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.574249801s"
Dec 13 01:31:17.740180 containerd[1460]: time="2024-12-13T01:31:17.740047051Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 01:31:17.769790 containerd[1460]: time="2024-12-13T01:31:17.769738912Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:31:18.200014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771699148.mount: Deactivated successfully.
Dec 13 01:31:18.208124 containerd[1460]: time="2024-12-13T01:31:18.208058323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:18.212519 containerd[1460]: time="2024-12-13T01:31:18.212447801Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188"
Dec 13 01:31:18.217769 containerd[1460]: time="2024-12-13T01:31:18.217692982Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:18.220662 containerd[1460]: time="2024-12-13T01:31:18.220594285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:18.221957 containerd[1460]: time="2024-12-13T01:31:18.221739609Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 451.94715ms"
Dec 13 01:31:18.221957 containerd[1460]: time="2024-12-13T01:31:18.221785664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 01:31:18.251541 containerd[1460]: time="2024-12-13T01:31:18.251493597Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 01:31:18.697123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2529947465.mount: Deactivated successfully.
Dec 13 01:31:20.921025 containerd[1460]: time="2024-12-13T01:31:20.920945999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:20.922654 containerd[1460]: time="2024-12-13T01:31:20.922586526Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56659115"
Dec 13 01:31:20.924054 containerd[1460]: time="2024-12-13T01:31:20.924015299Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:20.930193 containerd[1460]: time="2024-12-13T01:31:20.930102480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:20.932071 containerd[1460]: time="2024-12-13T01:31:20.931694643Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.680112237s"
Dec 13 01:31:20.932071 containerd[1460]: time="2024-12-13T01:31:20.931745606Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 01:31:24.110242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:31:24.119075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:24.428247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:24.448576 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:24.542815 kubelet[2139]: E1213 01:31:24.542632 2139 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:24.546248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:24.546483 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:24.620194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:24.627436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:24.660753 systemd[1]: Reloading requested from client PID 2153 ('systemctl') (unit session-7.scope)...
Dec 13 01:31:24.660999 systemd[1]: Reloading...
Dec 13 01:31:24.829962 zram_generator::config[2190]: No configuration found.
Dec 13 01:31:24.975530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:31:25.078455 systemd[1]: Reloading finished in 416 ms.
Dec 13 01:31:25.145364 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:31:25.145499 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:31:25.145849 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:25.151306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:25.392170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:25.402615 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:31:25.463386 kubelet[2245]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:31:25.463386 kubelet[2245]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:31:25.463386 kubelet[2245]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
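During the reload systemd again warns that docker.socket points at the legacy /var/run/docker.sock path and rewrites it to /run/docker.sock on the fly. Silencing the warning permanently would take a drop-in that clears and resets ListenStream=; a sketch (the drop-in filename is arbitrary):

    sudo mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' | sudo tee /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    # An empty assignment clears the inherited list before the new value is added
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    sudo systemctl daemon-reload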
Dec 13 01:31:25.463971 kubelet[2245]: I1213 01:31:25.463485 2245 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:31:26.246701 kubelet[2245]: I1213 01:31:26.246662 2245 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:31:26.246977 kubelet[2245]: I1213 01:31:26.246856 2245 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:31:26.247216 kubelet[2245]: I1213 01:31:26.247181 2245 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:31:26.280159 kubelet[2245]: E1213 01:31:26.280094 2245 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.13:6443: connect: connection refused
Dec 13 01:31:26.281178 kubelet[2245]: I1213 01:31:26.280992 2245 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:31:26.296390 kubelet[2245]: I1213 01:31:26.296340 2245 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:31:26.296896 kubelet[2245]: I1213 01:31:26.296868 2245 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:31:26.297194 kubelet[2245]: I1213 01:31:26.297169 2245 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:31:26.297406 kubelet[2245]: I1213 01:31:26.297207 2245 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:31:26.297406 kubelet[2245]: I1213 01:31:26.297227 2245 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:31:26.297501 kubelet[2245]: I1213 01:31:26.297443 2245 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:31:26.297629 kubelet[2245]: I1213 01:31:26.297610 2245 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:31:26.297629 kubelet[2245]: I1213 01:31:26.297641 2245 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:31:26.299073 kubelet[2245]: I1213 01:31:26.297690 2245 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:31:26.299073 kubelet[2245]: I1213 01:31:26.297716 2245 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:31:26.306364 kubelet[2245]: W1213 01:31:26.306158 2245 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Dec 13 01:31:26.306364 kubelet[2245]: E1213 01:31:26.306243 2245 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Dec 13 01:31:26.306839 kubelet[2245]: W1213 01:31:26.306636 2245 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Dec 13 01:31:26.306839 kubelet[2245]: E1213 01:31:26.306708 2245 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Dec 13 01:31:26.307785 kubelet[2245]: I1213 01:31:26.307589 2245 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:31:26.312653 kubelet[2245]: I1213 01:31:26.312626 2245 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:31:26.312845 kubelet[2245]: W1213 01:31:26.312831 2245 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:31:26.315948 kubelet[2245]: I1213 01:31:26.314046 2245 server.go:1256] "Started kubelet"
Dec 13 01:31:26.315948 kubelet[2245]: I1213 01:31:26.314467 2245 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:31:26.315948 kubelet[2245]: I1213 01:31:26.315667 2245 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:31:26.315948 kubelet[2245]: I1213 01:31:26.315718 2245 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:31:26.317608 kubelet[2245]: I1213 01:31:26.317582 2245 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:31:26.318084 kubelet[2245]: I1213 01:31:26.318054 2245 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:31:26.324560 kubelet[2245]: I1213 01:31:26.324505 2245 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:31:26.333443 kubelet[2245]: I1213 01:31:26.332545 2245 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:31:26.333443 kubelet[2245]: I1213 01:31:26.332673 2245 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:31:26.333827 kubelet[2245]: E1213 01:31:26.333802 2245 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal.181098755946096d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal,UID:ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 01:31:26.313998701 +0000 UTC m=+0.905295595,LastTimestamp:2024-12-13 01:31:26.313998701 +0000 UTC m=+0.905295595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal,}"
Dec 13 01:31:26.334250 kubelet[2245]: E1213 01:31:26.334225 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.13:6443: connect: connection refused" interval="200ms"
Dec 13 01:31:26.338450 kubelet[2245]: W1213 01:31:26.338378 2245 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Dec 13 01:31:26.338646 kubelet[2245]: E1213 01:31:26.338626 2245 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Dec 13 01:31:26.340509 kubelet[2245]: I1213 01:31:26.340303 2245 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:31:26.340509 kubelet[2245]: I1213 01:31:26.340323 2245 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:31:26.340509 kubelet[2245]: I1213 01:31:26.340470 2245 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:31:26.353030 kubelet[2245]: I1213 01:31:26.352988 2245 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:31:26.362049 kubelet[2245]: I1213 01:31:26.361672 2245 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:31:26.362049 kubelet[2245]: I1213 01:31:26.361720 2245 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:31:26.362049 kubelet[2245]: I1213 01:31:26.361745 2245 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:31:26.362049 kubelet[2245]: E1213 01:31:26.361819 2245 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:31:26.364690 kubelet[2245]: W1213 01:31:26.364354 2245 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Dec 13 01:31:26.364690 kubelet[2245]: E1213 01:31:26.364542 2245 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused
Dec 13 01:31:26.377687 kubelet[2245]: I1213 01:31:26.377635 2245 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:31:26.377687 kubelet[2245]: I1213 01:31:26.377692 2245 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:31:26.377869 kubelet[2245]: I1213 01:31:26.377724 2245 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:31:26.383492 kubelet[2245]: I1213 01:31:26.383408 2245 policy_none.go:49] "None policy: Start"
Dec 13 01:31:26.385169 kubelet[2245]: I1213 01:31:26.385106 2245 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:31:26.385169 kubelet[2245]: I1213 01:31:26.385183 2245 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:31:26.401370 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 01:31:26.412076 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:31:26.416706 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:31:26.424457 kubelet[2245]: I1213 01:31:26.424410 2245 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:31:26.425078 kubelet[2245]: I1213 01:31:26.424815 2245 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:31:26.428393 kubelet[2245]: E1213 01:31:26.428099 2245 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" not found"
Dec 13 01:31:26.434809 kubelet[2245]: I1213 01:31:26.434762 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.435305 kubelet[2245]: E1213 01:31:26.435284 2245 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.13:6443/api/v1/nodes\": dial tcp 10.128.0.13:6443: connect: connection refused" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.462701 kubelet[2245]: I1213 01:31:26.462639 2245 topology_manager.go:215] "Topology Admit Handler" podUID="65de25f3617b35ad49bbc31f8a6facba" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.471039 kubelet[2245]: I1213 01:31:26.470884 2245 topology_manager.go:215] "Topology Admit Handler" podUID="5f28d1a76efccc1833b75c78a27f7dec" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.482155 kubelet[2245]: I1213 01:31:26.481836 2245 topology_manager.go:215] "Topology Admit Handler" podUID="c146b7a798074169e5e43a4503947eef" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.488649 systemd[1]: Created slice kubepods-burstable-pod65de25f3617b35ad49bbc31f8a6facba.slice - libcontainer container kubepods-burstable-pod65de25f3617b35ad49bbc31f8a6facba.slice.
Dec 13 01:31:26.501462 systemd[1]: Created slice kubepods-burstable-pod5f28d1a76efccc1833b75c78a27f7dec.slice - libcontainer container kubepods-burstable-pod5f28d1a76efccc1833b75c78a27f7dec.slice.
Dec 13 01:31:26.512376 systemd[1]: Created slice kubepods-burstable-podc146b7a798074169e5e43a4503947eef.slice - libcontainer container kubepods-burstable-podc146b7a798074169e5e43a4503947eef.slice.
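The three Topology Admit Handler entries are the static control-plane pods defined under the manifest path the kubelet watches; with the systemd cgroup driver each admitted pod gets its own kubepods-burstable-pod<UID>.slice, as the Created slice lines show. A quick sketch for cross-checking both from the node (paths as logged above):

    # Static pod manifests the kubelet is watching
    ls /etc/kubernetes/manifests
    # Per-pod slices created for the admitted pods
    systemctl list-units --no-pager 'kubepods*.slice'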
Dec 13 01:31:26.533484 kubelet[2245]: I1213 01:31:26.533379 2245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c146b7a798074169e5e43a4503947eef-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"c146b7a798074169e5e43a4503947eef\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.533632 kubelet[2245]: I1213 01:31:26.533557 2245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f28d1a76efccc1833b75c78a27f7dec-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"5f28d1a76efccc1833b75c78a27f7dec\") " pod="kube-system/kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.533632 kubelet[2245]: I1213 01:31:26.533605 2245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c146b7a798074169e5e43a4503947eef-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"c146b7a798074169e5e43a4503947eef\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.533764 kubelet[2245]: I1213 01:31:26.533648 2245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f28d1a76efccc1833b75c78a27f7dec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"5f28d1a76efccc1833b75c78a27f7dec\") " pod="kube-system/kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.533764 kubelet[2245]: I1213 01:31:26.533696 2245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c146b7a798074169e5e43a4503947eef-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"c146b7a798074169e5e43a4503947eef\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.533764 kubelet[2245]: I1213 01:31:26.533735 2245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c146b7a798074169e5e43a4503947eef-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"c146b7a798074169e5e43a4503947eef\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.533914 kubelet[2245]: I1213 01:31:26.533787 2245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c146b7a798074169e5e43a4503947eef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"c146b7a798074169e5e43a4503947eef\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.533914 kubelet[2245]: I1213 01:31:26.533827 2245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65de25f3617b35ad49bbc31f8a6facba-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"65de25f3617b35ad49bbc31f8a6facba\") " pod="kube-system/kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.533914 kubelet[2245]: I1213 01:31:26.533863 2245 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f28d1a76efccc1833b75c78a27f7dec-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"5f28d1a76efccc1833b75c78a27f7dec\") " pod="kube-system/kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.535647 kubelet[2245]: E1213 01:31:26.535602 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.13:6443: connect: connection refused" interval="400ms"
Dec 13 01:31:26.641281 kubelet[2245]: I1213 01:31:26.641208 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.641754 kubelet[2245]: E1213 01:31:26.641715 2245 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.13:6443/api/v1/nodes\": dial tcp 10.128.0.13:6443: connect: connection refused" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:26.796976 containerd[1460]: time="2024-12-13T01:31:26.796820901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal,Uid:65de25f3617b35ad49bbc31f8a6facba,Namespace:kube-system,Attempt:0,}"
Dec 13 01:31:26.813287 containerd[1460]: time="2024-12-13T01:31:26.813218031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal,Uid:5f28d1a76efccc1833b75c78a27f7dec,Namespace:kube-system,Attempt:0,}"
Dec 13 01:31:26.816077 containerd[1460]: time="2024-12-13T01:31:26.816022762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal,Uid:c146b7a798074169e5e43a4503947eef,Namespace:kube-system,Attempt:0,}"
Dec 13 01:31:26.937083 kubelet[2245]: E1213 01:31:26.937043 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.13:6443: connect: connection refused" interval="800ms"
Dec 13 01:31:27.048814 kubelet[2245]: I1213 01:31:27.048659 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:27.049390 kubelet[2245]: E1213 01:31:27.049347 2245 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.13:6443/api/v1/nodes\": dial tcp 10.128.0.13:6443: connect: connection refused" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
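Node registration keeps failing with connection refused because the kubelet itself is about to start the very kube-apiserver it is dialing; the RunPodSandbox calls above are that bootstrap in progress. While this chicken-and-egg phase lasts, progress can be watched from the node, for example:

    # Are the control-plane containers up yet?
    sudo crictl ps --name kube-apiserver
    # Probe the endpoint the kubelet keeps dialing (address from the log above)
    curl -k https://10.128.0.13:6443/healthz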
Dec 13 01:31:27.190752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457112969.mount: Deactivated successfully.
Dec 13 01:31:27.201002 containerd[1460]: time="2024-12-13T01:31:27.200916587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:31:27.202249 containerd[1460]: time="2024-12-13T01:31:27.202173805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954"
Dec 13 01:31:27.203765 containerd[1460]: time="2024-12-13T01:31:27.203713822Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:31:27.205137 containerd[1460]: time="2024-12-13T01:31:27.205093929Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:31:27.206574 containerd[1460]: time="2024-12-13T01:31:27.206509307Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:31:27.208718 containerd[1460]: time="2024-12-13T01:31:27.208658093Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:31:27.210067 containerd[1460]: time="2024-12-13T01:31:27.209920855Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:31:27.213773 containerd[1460]: time="2024-12-13T01:31:27.213637978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:31:27.216444 containerd[1460]: time="2024-12-13T01:31:27.215767332Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 418.84313ms"
Dec 13 01:31:27.217268 containerd[1460]: time="2024-12-13T01:31:27.217218521Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 401.113176ms"
Dec 13 01:31:27.219076 containerd[1460]: time="2024-12-13T01:31:27.219029878Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 405.721538ms"
"https://10.128.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused Dec 13 01:31:27.239803 kubelet[2245]: E1213 01:31:27.239651 2245 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused Dec 13 01:31:27.277533 kubelet[2245]: W1213 01:31:27.277399 2245 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused Dec 13 01:31:27.277533 kubelet[2245]: E1213 01:31:27.277456 2245 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused Dec 13 01:31:27.323006 kubelet[2245]: W1213 01:31:27.321767 2245 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused Dec 13 01:31:27.323006 kubelet[2245]: E1213 01:31:27.321828 2245 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused Dec 13 01:31:27.440772 containerd[1460]: time="2024-12-13T01:31:27.440317543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:27.440772 containerd[1460]: time="2024-12-13T01:31:27.440419472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:27.440772 containerd[1460]: time="2024-12-13T01:31:27.440459207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:27.440772 containerd[1460]: time="2024-12-13T01:31:27.440602658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:27.445170 kubelet[2245]: W1213 01:31:27.444691 2245 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused Dec 13 01:31:27.445170 kubelet[2245]: E1213 01:31:27.444964 2245 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.13:6443: connect: connection refused Dec 13 01:31:27.446892 containerd[1460]: time="2024-12-13T01:31:27.446208383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
Dec 13 01:31:27.446892 containerd[1460]: time="2024-12-13T01:31:27.446208383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:31:27.446892 containerd[1460]: time="2024-12-13T01:31:27.446312775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:31:27.446892 containerd[1460]: time="2024-12-13T01:31:27.446339983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:27.452353 containerd[1460]: time="2024-12-13T01:31:27.450403704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:27.461178 containerd[1460]: time="2024-12-13T01:31:27.461033546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:31:27.463680 containerd[1460]: time="2024-12-13T01:31:27.461112295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:31:27.464247 containerd[1460]: time="2024-12-13T01:31:27.463914037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:27.464247 containerd[1460]: time="2024-12-13T01:31:27.464096681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:27.488219 systemd[1]: Started cri-containerd-20e8eb0bf12d3b69180560b3a7e8b396290cc10f2c82aabe414629e109079ac9.scope - libcontainer container 20e8eb0bf12d3b69180560b3a7e8b396290cc10f2c82aabe414629e109079ac9.
Dec 13 01:31:27.503405 systemd[1]: Started cri-containerd-d7fffcbefd1f0e734003bc0c490503085edeedb4a6baad68c859d7b8e68a4498.scope - libcontainer container d7fffcbefd1f0e734003bc0c490503085edeedb4a6baad68c859d7b8e68a4498.
Dec 13 01:31:27.512719 systemd[1]: Started cri-containerd-0e36d983fe309deb8c6a7044be37f99525f33e0d837f07d761f133dcb87bcb99.scope - libcontainer container 0e36d983fe309deb8c6a7044be37f99525f33e0d837f07d761f133dcb87bcb99.
Dec 13 01:31:27.613354 containerd[1460]: time="2024-12-13T01:31:27.613130583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal,Uid:65de25f3617b35ad49bbc31f8a6facba,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e36d983fe309deb8c6a7044be37f99525f33e0d837f07d761f133dcb87bcb99\""
Dec 13 01:31:27.615524 containerd[1460]: time="2024-12-13T01:31:27.615455758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal,Uid:c146b7a798074169e5e43a4503947eef,Namespace:kube-system,Attempt:0,} returns sandbox id \"20e8eb0bf12d3b69180560b3a7e8b396290cc10f2c82aabe414629e109079ac9\""
Dec 13 01:31:27.620107 kubelet[2245]: E1213 01:31:27.618276 2245 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-21291"
Dec 13 01:31:27.622052 kubelet[2245]: E1213 01:31:27.621170 2245 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flat"
Dec 13 01:31:27.622638 containerd[1460]: time="2024-12-13T01:31:27.622595146Z" level=info msg="CreateContainer within sandbox \"0e36d983fe309deb8c6a7044be37f99525f33e0d837f07d761f133dcb87bcb99\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:31:27.624971 containerd[1460]: time="2024-12-13T01:31:27.624602372Z" level=info msg="CreateContainer within sandbox \"20e8eb0bf12d3b69180560b3a7e8b396290cc10f2c82aabe414629e109079ac9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:31:27.646315 containerd[1460]: time="2024-12-13T01:31:27.646177700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal,Uid:5f28d1a76efccc1833b75c78a27f7dec,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7fffcbefd1f0e734003bc0c490503085edeedb4a6baad68c859d7b8e68a4498\""
Dec 13 01:31:27.648750 kubelet[2245]: E1213 01:31:27.648701 2245 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-21291"
Dec 13 01:31:27.649196 containerd[1460]: time="2024-12-13T01:31:27.649149077Z" level=info msg="CreateContainer within sandbox \"0e36d983fe309deb8c6a7044be37f99525f33e0d837f07d761f133dcb87bcb99\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"977871c4d8490a57f3dd0ae0fd352ac14b537d52835b8ab0e0e786aef77683bb\""
Dec 13 01:31:27.651653 containerd[1460]: time="2024-12-13T01:31:27.650016439Z" level=info msg="StartContainer for \"977871c4d8490a57f3dd0ae0fd352ac14b537d52835b8ab0e0e786aef77683bb\""
Dec 13 01:31:27.655513 containerd[1460]: time="2024-12-13T01:31:27.655470275Z" level=info msg="CreateContainer within sandbox \"d7fffcbefd1f0e734003bc0c490503085edeedb4a6baad68c859d7b8e68a4498\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
\"20e8eb0bf12d3b69180560b3a7e8b396290cc10f2c82aabe414629e109079ac9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9a31c727d2fee4e29bbe6fded35eb8c8b4a2d485220144b7b43aa0d939357263\"" Dec 13 01:31:27.686536 containerd[1460]: time="2024-12-13T01:31:27.686481160Z" level=info msg="StartContainer for \"9a31c727d2fee4e29bbe6fded35eb8c8b4a2d485220144b7b43aa0d939357263\"" Dec 13 01:31:27.698048 containerd[1460]: time="2024-12-13T01:31:27.697989505Z" level=info msg="CreateContainer within sandbox \"d7fffcbefd1f0e734003bc0c490503085edeedb4a6baad68c859d7b8e68a4498\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f8fb5d241b7e68980b588d60b1f7717f7a6c56e448012d64db03bfa9d862bcc0\"" Dec 13 01:31:27.699748 containerd[1460]: time="2024-12-13T01:31:27.699710770Z" level=info msg="StartContainer for \"f8fb5d241b7e68980b588d60b1f7717f7a6c56e448012d64db03bfa9d862bcc0\"" Dec 13 01:31:27.723168 systemd[1]: Started cri-containerd-977871c4d8490a57f3dd0ae0fd352ac14b537d52835b8ab0e0e786aef77683bb.scope - libcontainer container 977871c4d8490a57f3dd0ae0fd352ac14b537d52835b8ab0e0e786aef77683bb. Dec 13 01:31:27.738669 kubelet[2245]: E1213 01:31:27.738628 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.13:6443: connect: connection refused" interval="1.6s" Dec 13 01:31:27.765893 systemd[1]: Started cri-containerd-9a31c727d2fee4e29bbe6fded35eb8c8b4a2d485220144b7b43aa0d939357263.scope - libcontainer container 9a31c727d2fee4e29bbe6fded35eb8c8b4a2d485220144b7b43aa0d939357263. Dec 13 01:31:27.776252 systemd[1]: Started cri-containerd-f8fb5d241b7e68980b588d60b1f7717f7a6c56e448012d64db03bfa9d862bcc0.scope - libcontainer container f8fb5d241b7e68980b588d60b1f7717f7a6c56e448012d64db03bfa9d862bcc0. Dec 13 01:31:27.832632 containerd[1460]: time="2024-12-13T01:31:27.832572533Z" level=info msg="StartContainer for \"977871c4d8490a57f3dd0ae0fd352ac14b537d52835b8ab0e0e786aef77683bb\" returns successfully" Dec 13 01:31:27.859516 kubelet[2245]: I1213 01:31:27.859460 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:27.861639 kubelet[2245]: E1213 01:31:27.861594 2245 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.13:6443/api/v1/nodes\": dial tcp 10.128.0.13:6443: connect: connection refused" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:27.885139 containerd[1460]: time="2024-12-13T01:31:27.882587318Z" level=info msg="StartContainer for \"f8fb5d241b7e68980b588d60b1f7717f7a6c56e448012d64db03bfa9d862bcc0\" returns successfully" Dec 13 01:31:27.905957 containerd[1460]: time="2024-12-13T01:31:27.904942647Z" level=info msg="StartContainer for \"9a31c727d2fee4e29bbe6fded35eb8c8b4a2d485220144b7b43aa0d939357263\" returns successfully" Dec 13 01:31:29.467659 kubelet[2245]: I1213 01:31:29.467425 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:30.191165 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 01:31:30.836150 kubelet[2245]: E1213 01:31:30.836104 2245 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" not found" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:30.921548 kubelet[2245]: I1213 01:31:30.921473 2245 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal"
Dec 13 01:31:31.306892 kubelet[2245]: I1213 01:31:31.306835 2245 apiserver.go:52] "Watching apiserver"
Dec 13 01:31:31.333839 kubelet[2245]: I1213 01:31:31.333767 2245 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:31:33.453167 kubelet[2245]: W1213 01:31:33.452797 2245 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 01:31:33.568261 systemd[1]: Reloading requested from client PID 2520 ('systemctl') (unit session-7.scope)...
Dec 13 01:31:33.568600 systemd[1]: Reloading...
Dec 13 01:31:33.722975 zram_generator::config[2563]: No configuration found.
Dec 13 01:31:33.871197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:31:34.004686 systemd[1]: Reloading finished in 435 ms.
Dec 13 01:31:34.057986 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:34.058690 kubelet[2245]: I1213 01:31:34.058458 2245 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:31:34.073651 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:31:34.073995 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:34.074105 systemd[1]: kubelet.service: Consumed 1.427s CPU time, 113.1M memory peak, 0B memory swap peak.
Dec 13 01:31:34.079404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:34.338072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:34.352630 (kubelet)[2608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:31:34.443903 kubelet[2608]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:31:34.443903 kubelet[2608]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:31:34.443903 kubelet[2608]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:31:34.443903 kubelet[2608]: I1213 01:31:34.441894 2608 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:31:34.453820 kubelet[2608]: I1213 01:31:34.453780 2608 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:31:34.453820 kubelet[2608]: I1213 01:31:34.453818 2608 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:31:34.454207 kubelet[2608]: I1213 01:31:34.454183 2608 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:31:34.459057 kubelet[2608]: I1213 01:31:34.458516 2608 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:31:34.462469 kubelet[2608]: I1213 01:31:34.461971 2608 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:31:34.477599 kubelet[2608]: I1213 01:31:34.477551 2608 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:31:34.478002 kubelet[2608]: I1213 01:31:34.477979 2608 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:31:34.478277 kubelet[2608]: I1213 01:31:34.478253 2608 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:31:34.478529 kubelet[2608]: I1213 01:31:34.478290 2608 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:31:34.478529 kubelet[2608]: I1213 01:31:34.478308 2608 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:31:34.478529 kubelet[2608]: I1213 01:31:34.478363 2608 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:31:34.478529 kubelet[2608]: I1213 01:31:34.478501 2608 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:31:34.478529 kubelet[2608]: I1213 01:31:34.478521 2608 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:31:34.478862 kubelet[2608]: I1213 01:31:34.478560 2608 kubelet.go:312] "Adding apiserver pod source" Dec 13 
01:31:34.478862 kubelet[2608]: I1213 01:31:34.478578 2608 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:31:34.484953 kubelet[2608]: I1213 01:31:34.482537 2608 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:31:34.484953 kubelet[2608]: I1213 01:31:34.483305 2608 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:31:34.484953 kubelet[2608]: I1213 01:31:34.483841 2608 server.go:1256] "Started kubelet" Dec 13 01:31:34.486599 kubelet[2608]: I1213 01:31:34.486170 2608 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:31:34.497679 kubelet[2608]: I1213 01:31:34.497650 2608 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:31:34.501891 kubelet[2608]: I1213 01:31:34.501855 2608 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:31:34.503973 kubelet[2608]: I1213 01:31:34.502251 2608 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:31:34.511952 kubelet[2608]: I1213 01:31:34.507241 2608 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:31:34.511952 kubelet[2608]: I1213 01:31:34.509033 2608 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:31:34.511952 kubelet[2608]: I1213 01:31:34.509236 2608 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:31:34.511952 kubelet[2608]: I1213 01:31:34.510998 2608 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:31:34.516667 kubelet[2608]: I1213 01:31:34.516632 2608 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:31:34.521338 kubelet[2608]: I1213 01:31:34.521299 2608 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:31:34.541321 kubelet[2608]: E1213 01:31:34.541285 2608 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:31:34.547747 kubelet[2608]: I1213 01:31:34.546903 2608 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:31:34.561255 kubelet[2608]: I1213 01:31:34.560216 2608 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:31:34.569324 kubelet[2608]: I1213 01:31:34.568452 2608 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:31:34.569585 kubelet[2608]: I1213 01:31:34.569567 2608 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:31:34.569709 kubelet[2608]: I1213 01:31:34.569688 2608 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:31:34.569919 kubelet[2608]: E1213 01:31:34.569903 2608 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:31:34.627344 kubelet[2608]: I1213 01:31:34.627305 2608 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.644465 kubelet[2608]: I1213 01:31:34.644071 2608 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.644465 kubelet[2608]: I1213 01:31:34.644174 2608 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.673067 kubelet[2608]: E1213 01:31:34.673003 2608 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:31:34.676480 kubelet[2608]: I1213 01:31:34.676448 2608 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:31:34.677067 kubelet[2608]: I1213 01:31:34.676675 2608 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:31:34.677067 kubelet[2608]: I1213 01:31:34.676711 2608 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:31:34.677234 kubelet[2608]: I1213 01:31:34.677088 2608 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:31:34.677234 kubelet[2608]: I1213 01:31:34.677131 2608 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:31:34.677234 kubelet[2608]: I1213 01:31:34.677162 2608 policy_none.go:49] "None policy: Start" Dec 13 01:31:34.678422 kubelet[2608]: I1213 01:31:34.678285 2608 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:31:34.678422 kubelet[2608]: I1213 01:31:34.678331 2608 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:31:34.678862 kubelet[2608]: I1213 01:31:34.678603 2608 state_mem.go:75] "Updated machine memory state" Dec 13 01:31:34.686455 kubelet[2608]: I1213 01:31:34.686427 2608 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:31:34.688002 kubelet[2608]: I1213 01:31:34.687784 2608 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:31:34.873730 kubelet[2608]: I1213 01:31:34.873571 2608 topology_manager.go:215] "Topology Admit Handler" podUID="5f28d1a76efccc1833b75c78a27f7dec" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.875428 kubelet[2608]: I1213 01:31:34.874018 2608 topology_manager.go:215] "Topology Admit Handler" podUID="c146b7a798074169e5e43a4503947eef" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.875428 kubelet[2608]: I1213 01:31:34.874602 2608 topology_manager.go:215] "Topology Admit Handler" podUID="65de25f3617b35ad49bbc31f8a6facba" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.888054 kubelet[2608]: W1213 01:31:34.887619 2608 warnings.go:70] metadata.name: this is used in 
the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:31:34.888054 kubelet[2608]: W1213 01:31:34.887671 2608 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:31:34.888054 kubelet[2608]: E1213 01:31:34.887723 2608 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.888054 kubelet[2608]: W1213 01:31:34.887886 2608 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:31:34.915889 kubelet[2608]: I1213 01:31:34.915278 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c146b7a798074169e5e43a4503947eef-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"c146b7a798074169e5e43a4503947eef\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.915889 kubelet[2608]: I1213 01:31:34.915371 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c146b7a798074169e5e43a4503947eef-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"c146b7a798074169e5e43a4503947eef\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.915889 kubelet[2608]: I1213 01:31:34.915423 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65de25f3617b35ad49bbc31f8a6facba-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"65de25f3617b35ad49bbc31f8a6facba\") " pod="kube-system/kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.915889 kubelet[2608]: I1213 01:31:34.915467 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f28d1a76efccc1833b75c78a27f7dec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"5f28d1a76efccc1833b75c78a27f7dec\") " pod="kube-system/kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.916273 kubelet[2608]: I1213 01:31:34.915509 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f28d1a76efccc1833b75c78a27f7dec-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"5f28d1a76efccc1833b75c78a27f7dec\") " pod="kube-system/kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.916273 kubelet[2608]: I1213 01:31:34.915551 2608 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c146b7a798074169e5e43a4503947eef-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"c146b7a798074169e5e43a4503947eef\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.916273 kubelet[2608]: I1213 01:31:34.915604 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c146b7a798074169e5e43a4503947eef-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"c146b7a798074169e5e43a4503947eef\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.916273 kubelet[2608]: I1213 01:31:34.915654 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c146b7a798074169e5e43a4503947eef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"c146b7a798074169e5e43a4503947eef\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:34.916476 kubelet[2608]: I1213 01:31:34.915696 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f28d1a76efccc1833b75c78a27f7dec-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" (UID: \"5f28d1a76efccc1833b75c78a27f7dec\") " pod="kube-system/kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:35.492274 kubelet[2608]: I1213 01:31:35.492186 2608 apiserver.go:52] "Watching apiserver" Dec 13 01:31:35.509895 kubelet[2608]: I1213 01:31:35.509843 2608 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:31:35.667991 kubelet[2608]: W1213 01:31:35.667944 2608 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:31:35.668162 kubelet[2608]: E1213 01:31:35.668037 2608 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:31:35.769662 kubelet[2608]: I1213 01:31:35.769516 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" podStartSLOduration=1.769423929 podStartE2EDuration="1.769423929s" podCreationTimestamp="2024-12-13 01:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:31:35.740809436 +0000 UTC m=+1.381656536" watchObservedRunningTime="2024-12-13 01:31:35.769423929 +0000 UTC m=+1.410271031" Dec 13 01:31:35.797475 kubelet[2608]: I1213 01:31:35.797425 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" podStartSLOduration=2.797369557 podStartE2EDuration="2.797369557s" podCreationTimestamp="2024-12-13 01:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:31:35.770866423 +0000 UTC m=+1.411713523" watchObservedRunningTime="2024-12-13 01:31:35.797369557 +0000 UTC m=+1.438216656" Dec 13 01:31:37.628559 kubelet[2608]: I1213 01:31:37.628493 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" podStartSLOduration=3.628435156 podStartE2EDuration="3.628435156s" podCreationTimestamp="2024-12-13 01:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:31:35.798336051 +0000 UTC m=+1.439183152" watchObservedRunningTime="2024-12-13 01:31:37.628435156 +0000 UTC m=+3.269282249" Dec 13 01:31:40.112114 sudo[1707]: pam_unix(sudo:session): session closed for user root Dec 13 01:31:40.155339 sshd[1704]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:40.162099 systemd[1]: sshd@6-10.128.0.13:22-147.75.109.163:55124.service: Deactivated successfully. Dec 13 01:31:40.165356 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:31:40.165685 systemd[1]: session-7.scope: Consumed 6.673s CPU time, 193.5M memory peak, 0B memory swap peak. Dec 13 01:31:40.166836 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:31:40.168648 systemd-logind[1449]: Removed session 7. Dec 13 01:31:44.328385 update_engine[1454]: I20241213 01:31:44.328262 1454 update_attempter.cc:509] Updating boot flags... Dec 13 01:31:44.400321 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2696) Dec 13 01:31:44.539106 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2692) Dec 13 01:31:44.657967 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2692) Dec 13 01:31:46.429280 kubelet[2608]: I1213 01:31:46.429132 2608 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:31:46.431390 containerd[1460]: time="2024-12-13T01:31:46.430548573Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:31:46.432077 kubelet[2608]: I1213 01:31:46.430838 2608 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:31:46.991828 kubelet[2608]: I1213 01:31:46.991764 2608 topology_manager.go:215] "Topology Admit Handler" podUID="8f84d418-89b0-4f9f-a325-c8dae5b773d9" podNamespace="kube-system" podName="kube-proxy-jbv4n" Dec 13 01:31:47.008703 systemd[1]: Created slice kubepods-besteffort-pod8f84d418_89b0_4f9f_a325_c8dae5b773d9.slice - libcontainer container kubepods-besteffort-pod8f84d418_89b0_4f9f_a325_c8dae5b773d9.slice. 
Dec 13 01:31:47.102991 kubelet[2608]: I1213 01:31:47.102866 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f84d418-89b0-4f9f-a325-c8dae5b773d9-xtables-lock\") pod \"kube-proxy-jbv4n\" (UID: \"8f84d418-89b0-4f9f-a325-c8dae5b773d9\") " pod="kube-system/kube-proxy-jbv4n" Dec 13 01:31:47.103212 kubelet[2608]: I1213 01:31:47.103020 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64ws\" (UniqueName: \"kubernetes.io/projected/8f84d418-89b0-4f9f-a325-c8dae5b773d9-kube-api-access-b64ws\") pod \"kube-proxy-jbv4n\" (UID: \"8f84d418-89b0-4f9f-a325-c8dae5b773d9\") " pod="kube-system/kube-proxy-jbv4n" Dec 13 01:31:47.103212 kubelet[2608]: I1213 01:31:47.103066 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8f84d418-89b0-4f9f-a325-c8dae5b773d9-kube-proxy\") pod \"kube-proxy-jbv4n\" (UID: \"8f84d418-89b0-4f9f-a325-c8dae5b773d9\") " pod="kube-system/kube-proxy-jbv4n" Dec 13 01:31:47.103212 kubelet[2608]: I1213 01:31:47.103101 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f84d418-89b0-4f9f-a325-c8dae5b773d9-lib-modules\") pod \"kube-proxy-jbv4n\" (UID: \"8f84d418-89b0-4f9f-a325-c8dae5b773d9\") " pod="kube-system/kube-proxy-jbv4n" Dec 13 01:31:47.318611 containerd[1460]: time="2024-12-13T01:31:47.318217049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jbv4n,Uid:8f84d418-89b0-4f9f-a325-c8dae5b773d9,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:47.364484 containerd[1460]: time="2024-12-13T01:31:47.364314648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:47.364484 containerd[1460]: time="2024-12-13T01:31:47.364394424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:47.364484 containerd[1460]: time="2024-12-13T01:31:47.364412827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:47.364920 containerd[1460]: time="2024-12-13T01:31:47.364539742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:47.401294 systemd[1]: run-containerd-runc-k8s.io-c70a49b42222d5d398cd5d0c4f9261edc9fdcf7bda99bbf75318f19f41ec97d4-runc.f7DGKv.mount: Deactivated successfully. Dec 13 01:31:47.411590 systemd[1]: Started cri-containerd-c70a49b42222d5d398cd5d0c4f9261edc9fdcf7bda99bbf75318f19f41ec97d4.scope - libcontainer container c70a49b42222d5d398cd5d0c4f9261edc9fdcf7bda99bbf75318f19f41ec97d4. 
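The four VerifyControllerAttachedVolume entries above correspond to kube-proxy's usual volume set: two hostPath mounts, the kube-proxy ConfigMap, and a projected service-account token. A sketch of the equivalent pod-spec volumes using the core/v1 Go types; the host paths are the conventional defaults, assumed here rather than read from the log, and the projected source is trimmed to the token alone.

```go
package main

import corev1 "k8s.io/api/core/v1"

// Volume sources matching the reconciler_common.go lines for kube-proxy-jbv4n.
var kubeProxyVolumes = []corev1.Volume{
	{
		Name: "xtables-lock", // kubernetes.io/host-path in the log
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock"}, // assumed path
		},
	},
	{
		Name: "lib-modules", // kubernetes.io/host-path in the log
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"}, // assumed path
		},
	},
	{
		Name: "kube-proxy", // kubernetes.io/configmap in the log
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
			},
		},
	},
	{
		Name: "kube-api-access-b64ws", // kubernetes.io/projected in the log
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"},
				}},
			},
		},
	},
}
```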
Dec 13 01:31:47.447756 containerd[1460]: time="2024-12-13T01:31:47.447692002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jbv4n,Uid:8f84d418-89b0-4f9f-a325-c8dae5b773d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c70a49b42222d5d398cd5d0c4f9261edc9fdcf7bda99bbf75318f19f41ec97d4\"" Dec 13 01:31:47.452773 containerd[1460]: time="2024-12-13T01:31:47.452478948Z" level=info msg="CreateContainer within sandbox \"c70a49b42222d5d398cd5d0c4f9261edc9fdcf7bda99bbf75318f19f41ec97d4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:31:47.474210 containerd[1460]: time="2024-12-13T01:31:47.474149859Z" level=info msg="CreateContainer within sandbox \"c70a49b42222d5d398cd5d0c4f9261edc9fdcf7bda99bbf75318f19f41ec97d4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fc09c668a4324e01cc6e10ca0858eeee100f13e06a84faa13bb2dcad2a930e2a\"" Dec 13 01:31:47.476418 containerd[1460]: time="2024-12-13T01:31:47.475140232Z" level=info msg="StartContainer for \"fc09c668a4324e01cc6e10ca0858eeee100f13e06a84faa13bb2dcad2a930e2a\"" Dec 13 01:31:47.532463 systemd[1]: Started cri-containerd-fc09c668a4324e01cc6e10ca0858eeee100f13e06a84faa13bb2dcad2a930e2a.scope - libcontainer container fc09c668a4324e01cc6e10ca0858eeee100f13e06a84faa13bb2dcad2a930e2a. Dec 13 01:31:47.578782 kubelet[2608]: I1213 01:31:47.578635 2608 topology_manager.go:215] "Topology Admit Handler" podUID="06a3751f-25dc-4596-bb7c-bd90fcc1557e" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-95v24" Dec 13 01:31:47.596395 systemd[1]: Created slice kubepods-besteffort-pod06a3751f_25dc_4596_bb7c_bd90fcc1557e.slice - libcontainer container kubepods-besteffort-pod06a3751f_25dc_4596_bb7c_bd90fcc1557e.slice. Dec 13 01:31:47.598088 kubelet[2608]: W1213 01:31:47.598054 2608 reflector.go:539] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' and this object Dec 13 01:31:47.598243 kubelet[2608]: E1213 01:31:47.598107 2608 reflector.go:147] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' and this object Dec 13 01:31:47.598243 kubelet[2608]: W1213 01:31:47.598183 2608 reflector.go:539] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' and this object Dec 13 01:31:47.598243 kubelet[2608]: E1213 01:31:47.598206 2608 reflector.go:147] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User 
"system:node:ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' and this object Dec 13 01:31:47.607392 kubelet[2608]: I1213 01:31:47.607340 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j7zm\" (UniqueName: \"kubernetes.io/projected/06a3751f-25dc-4596-bb7c-bd90fcc1557e-kube-api-access-8j7zm\") pod \"tigera-operator-c7ccbd65-95v24\" (UID: \"06a3751f-25dc-4596-bb7c-bd90fcc1557e\") " pod="tigera-operator/tigera-operator-c7ccbd65-95v24" Dec 13 01:31:47.607392 kubelet[2608]: I1213 01:31:47.607402 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/06a3751f-25dc-4596-bb7c-bd90fcc1557e-var-lib-calico\") pod \"tigera-operator-c7ccbd65-95v24\" (UID: \"06a3751f-25dc-4596-bb7c-bd90fcc1557e\") " pod="tigera-operator/tigera-operator-c7ccbd65-95v24" Dec 13 01:31:47.695514 containerd[1460]: time="2024-12-13T01:31:47.695445013Z" level=info msg="StartContainer for \"fc09c668a4324e01cc6e10ca0858eeee100f13e06a84faa13bb2dcad2a930e2a\" returns successfully" Dec 13 01:31:48.717357 kubelet[2608]: E1213 01:31:48.717305 2608 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:31:48.717357 kubelet[2608]: E1213 01:31:48.717350 2608 projected.go:200] Error preparing data for projected volume kube-api-access-8j7zm for pod tigera-operator/tigera-operator-c7ccbd65-95v24: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:31:48.718008 kubelet[2608]: E1213 01:31:48.717439 2608 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06a3751f-25dc-4596-bb7c-bd90fcc1557e-kube-api-access-8j7zm podName:06a3751f-25dc-4596-bb7c-bd90fcc1557e nodeName:}" failed. No retries permitted until 2024-12-13 01:31:49.217412003 +0000 UTC m=+14.858259081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8j7zm" (UniqueName: "kubernetes.io/projected/06a3751f-25dc-4596-bb7c-bd90fcc1557e-kube-api-access-8j7zm") pod "tigera-operator-c7ccbd65-95v24" (UID: "06a3751f-25dc-4596-bb7c-bd90fcc1557e") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:31:49.401557 containerd[1460]: time="2024-12-13T01:31:49.401476772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-95v24,Uid:06a3751f-25dc-4596-bb7c-bd90fcc1557e,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:31:49.439281 containerd[1460]: time="2024-12-13T01:31:49.439140016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:49.439281 containerd[1460]: time="2024-12-13T01:31:49.439225234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:49.439281 containerd[1460]: time="2024-12-13T01:31:49.439244761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:49.439589 containerd[1460]: time="2024-12-13T01:31:49.439390229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:49.475125 systemd[1]: Started cri-containerd-61b0f63aa026557570a4e072b8e6b5adf0a73900d2e0c355885d1786c6e6bec4.scope - libcontainer container 61b0f63aa026557570a4e072b8e6b5adf0a73900d2e0c355885d1786c6e6bec4. Dec 13 01:31:49.532160 containerd[1460]: time="2024-12-13T01:31:49.532100512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-95v24,Uid:06a3751f-25dc-4596-bb7c-bd90fcc1557e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"61b0f63aa026557570a4e072b8e6b5adf0a73900d2e0c355885d1786c6e6bec4\"" Dec 13 01:31:49.534756 containerd[1460]: time="2024-12-13T01:31:49.534473557Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:31:50.603815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount261525946.mount: Deactivated successfully. Dec 13 01:31:51.885361 containerd[1460]: time="2024-12-13T01:31:51.885283290Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:51.886867 containerd[1460]: time="2024-12-13T01:31:51.886728458Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764313" Dec 13 01:31:51.888364 containerd[1460]: time="2024-12-13T01:31:51.888251408Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:51.891894 containerd[1460]: time="2024-12-13T01:31:51.891814573Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:51.892974 containerd[1460]: time="2024-12-13T01:31:51.892823332Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.358294725s" Dec 13 01:31:51.892974 containerd[1460]: time="2024-12-13T01:31:51.892870464Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:31:51.896493 containerd[1460]: time="2024-12-13T01:31:51.896433046Z" level=info msg="CreateContainer within sandbox \"61b0f63aa026557570a4e072b8e6b5adf0a73900d2e0c355885d1786c6e6bec4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:31:51.914866 containerd[1460]: time="2024-12-13T01:31:51.914635979Z" level=info msg="CreateContainer within sandbox \"61b0f63aa026557570a4e072b8e6b5adf0a73900d2e0c355885d1786c6e6bec4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"be5fdae5fe57f5c94c81410f570ad0838d303b5b6303181432eb19cd5f055478\"" Dec 13 01:31:51.918504 containerd[1460]: time="2024-12-13T01:31:51.917359537Z" level=info msg="StartContainer for \"be5fdae5fe57f5c94c81410f570ad0838d303b5b6303181432eb19cd5f055478\"" Dec 13 01:31:51.978180 systemd[1]: Started cri-containerd-be5fdae5fe57f5c94c81410f570ad0838d303b5b6303181432eb19cd5f055478.scope - libcontainer container be5fdae5fe57f5c94c81410f570ad0838d303b5b6303181432eb19cd5f055478. 
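The PullImage round trip above shows the CRI image service resolving the `quay.io/tigera/operator:v1.36.2` tag to a digest-pinned reference and reporting the elapsed time (2.358294725s here). A sketch of the same call, assuming `img` is a connected CRI v1 image-service client:

```go
package main

import (
	"context"
	"time"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// pullImage mirrors the "PullImage ... returns image reference" exchange:
// the runtime pulls the tag and returns the resolved image id.
func pullImage(ctx context.Context, img runtimeapi.ImageServiceClient, ref string) (string, time.Duration, error) {
	start := time.Now()
	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: ref},
	})
	if err != nil {
		return "", 0, err
	}
	// resp.ImageRef is the sha256 id echoed in the "Pulled image" log line.
	return resp.ImageRef, time.Since(start), nil
}
```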
Dec 13 01:31:52.012842 containerd[1460]: time="2024-12-13T01:31:52.012790413Z" level=info msg="StartContainer for \"be5fdae5fe57f5c94c81410f570ad0838d303b5b6303181432eb19cd5f055478\" returns successfully" Dec 13 01:31:52.656124 kubelet[2608]: I1213 01:31:52.656037 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jbv4n" podStartSLOduration=6.655361425 podStartE2EDuration="6.655361425s" podCreationTimestamp="2024-12-13 01:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:31:48.64669859 +0000 UTC m=+14.287545692" watchObservedRunningTime="2024-12-13 01:31:52.655361425 +0000 UTC m=+18.296208522" Dec 13 01:31:54.602554 kubelet[2608]: I1213 01:31:54.601882 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-95v24" podStartSLOduration=5.242407486 podStartE2EDuration="7.601822517s" podCreationTimestamp="2024-12-13 01:31:47 +0000 UTC" firstStartedPulling="2024-12-13 01:31:49.533845872 +0000 UTC m=+15.174692962" lastFinishedPulling="2024-12-13 01:31:51.893260903 +0000 UTC m=+17.534107993" observedRunningTime="2024-12-13 01:31:52.656340424 +0000 UTC m=+18.297187523" watchObservedRunningTime="2024-12-13 01:31:54.601822517 +0000 UTC m=+20.242669655" Dec 13 01:31:55.403954 kubelet[2608]: I1213 01:31:55.402965 2608 topology_manager.go:215] "Topology Admit Handler" podUID="587acffd-b1c8-4044-a6cc-df7b8e8520af" podNamespace="calico-system" podName="calico-typha-7794f5684f-4dlw7" Dec 13 01:31:55.415317 systemd[1]: Created slice kubepods-besteffort-pod587acffd_b1c8_4044_a6cc_df7b8e8520af.slice - libcontainer container kubepods-besteffort-pod587acffd_b1c8_4044_a6cc_df7b8e8520af.slice. Dec 13 01:31:55.455217 kubelet[2608]: I1213 01:31:55.455173 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/587acffd-b1c8-4044-a6cc-df7b8e8520af-tigera-ca-bundle\") pod \"calico-typha-7794f5684f-4dlw7\" (UID: \"587acffd-b1c8-4044-a6cc-df7b8e8520af\") " pod="calico-system/calico-typha-7794f5684f-4dlw7" Dec 13 01:31:55.455552 kubelet[2608]: I1213 01:31:55.455518 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/587acffd-b1c8-4044-a6cc-df7b8e8520af-typha-certs\") pod \"calico-typha-7794f5684f-4dlw7\" (UID: \"587acffd-b1c8-4044-a6cc-df7b8e8520af\") " pod="calico-system/calico-typha-7794f5684f-4dlw7" Dec 13 01:31:55.456037 kubelet[2608]: I1213 01:31:55.455713 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rscml\" (UniqueName: \"kubernetes.io/projected/587acffd-b1c8-4044-a6cc-df7b8e8520af-kube-api-access-rscml\") pod \"calico-typha-7794f5684f-4dlw7\" (UID: \"587acffd-b1c8-4044-a6cc-df7b8e8520af\") " pod="calico-system/calico-typha-7794f5684f-4dlw7" Dec 13 01:31:55.649318 kubelet[2608]: I1213 01:31:55.648587 2608 topology_manager.go:215] "Topology Admit Handler" podUID="92802b75-ad0d-47ff-bba5-b8e06dbe3017" podNamespace="calico-system" podName="calico-node-sswtz" Dec 13 01:31:55.663251 systemd[1]: Created slice kubepods-besteffort-pod92802b75_ad0d_47ff_bba5_b8e06dbe3017.slice - libcontainer container kubepods-besteffort-pod92802b75_ad0d_47ff_bba5_b8e06dbe3017.slice. 
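The pod_startup_latency_tracker numbers above for tigera-operator are internally consistent and show that podStartSLOduration excludes image-pull time: the pull ran from 01:31:49.533845872 to 01:31:51.893260903 (2.359415031s), and 7.601822517s end-to-end minus that pull time is exactly the reported 5.242407486s. A tiny check of the arithmetic:

```go
// Worked check of the tigera-operator startup-latency line:
// podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	e2e, _ := time.ParseDuration("7.601822517s") // podStartE2EDuration from the log
	firstPull := mustParse("2024-12-13 01:31:49.533845872")
	lastPull := mustParse("2024-12-13 01:31:51.893260903")
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo) // 5.242407486s, matching podStartSLOduration in the log
}
```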
Dec 13 01:31:55.723675 containerd[1460]: time="2024-12-13T01:31:55.723622177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7794f5684f-4dlw7,Uid:587acffd-b1c8-4044-a6cc-df7b8e8520af,Namespace:calico-system,Attempt:0,}" Dec 13 01:31:55.757901 kubelet[2608]: I1213 01:31:55.757765 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/92802b75-ad0d-47ff-bba5-b8e06dbe3017-var-lib-calico\") pod \"calico-node-sswtz\" (UID: \"92802b75-ad0d-47ff-bba5-b8e06dbe3017\") " pod="calico-system/calico-node-sswtz" Dec 13 01:31:55.760206 kubelet[2608]: I1213 01:31:55.760181 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/92802b75-ad0d-47ff-bba5-b8e06dbe3017-flexvol-driver-host\") pod \"calico-node-sswtz\" (UID: \"92802b75-ad0d-47ff-bba5-b8e06dbe3017\") " pod="calico-system/calico-node-sswtz" Dec 13 01:31:55.761005 kubelet[2608]: I1213 01:31:55.760413 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92802b75-ad0d-47ff-bba5-b8e06dbe3017-xtables-lock\") pod \"calico-node-sswtz\" (UID: \"92802b75-ad0d-47ff-bba5-b8e06dbe3017\") " pod="calico-system/calico-node-sswtz" Dec 13 01:31:55.761005 kubelet[2608]: I1213 01:31:55.760464 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/92802b75-ad0d-47ff-bba5-b8e06dbe3017-node-certs\") pod \"calico-node-sswtz\" (UID: \"92802b75-ad0d-47ff-bba5-b8e06dbe3017\") " pod="calico-system/calico-node-sswtz" Dec 13 01:31:55.761005 kubelet[2608]: I1213 01:31:55.760500 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/92802b75-ad0d-47ff-bba5-b8e06dbe3017-cni-net-dir\") pod \"calico-node-sswtz\" (UID: \"92802b75-ad0d-47ff-bba5-b8e06dbe3017\") " pod="calico-system/calico-node-sswtz" Dec 13 01:31:55.761005 kubelet[2608]: I1213 01:31:55.760549 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/92802b75-ad0d-47ff-bba5-b8e06dbe3017-cni-log-dir\") pod \"calico-node-sswtz\" (UID: \"92802b75-ad0d-47ff-bba5-b8e06dbe3017\") " pod="calico-system/calico-node-sswtz" Dec 13 01:31:55.761005 kubelet[2608]: I1213 01:31:55.760587 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92802b75-ad0d-47ff-bba5-b8e06dbe3017-lib-modules\") pod \"calico-node-sswtz\" (UID: \"92802b75-ad0d-47ff-bba5-b8e06dbe3017\") " pod="calico-system/calico-node-sswtz" Dec 13 01:31:55.761311 kubelet[2608]: I1213 01:31:55.760627 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdtfl\" (UniqueName: \"kubernetes.io/projected/92802b75-ad0d-47ff-bba5-b8e06dbe3017-kube-api-access-zdtfl\") pod \"calico-node-sswtz\" (UID: \"92802b75-ad0d-47ff-bba5-b8e06dbe3017\") " pod="calico-system/calico-node-sswtz" Dec 13 01:31:55.761311 kubelet[2608]: I1213 01:31:55.760663 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/92802b75-ad0d-47ff-bba5-b8e06dbe3017-cni-bin-dir\") pod \"calico-node-sswtz\" (UID: \"92802b75-ad0d-47ff-bba5-b8e06dbe3017\") " pod="calico-system/calico-node-sswtz" Dec 13 01:31:55.761311 kubelet[2608]: I1213 01:31:55.760700 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92802b75-ad0d-47ff-bba5-b8e06dbe3017-tigera-ca-bundle\") pod \"calico-node-sswtz\" (UID: \"92802b75-ad0d-47ff-bba5-b8e06dbe3017\") " pod="calico-system/calico-node-sswtz" Dec 13 01:31:55.761311 kubelet[2608]: I1213 01:31:55.760734 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/92802b75-ad0d-47ff-bba5-b8e06dbe3017-var-run-calico\") pod \"calico-node-sswtz\" (UID: \"92802b75-ad0d-47ff-bba5-b8e06dbe3017\") " pod="calico-system/calico-node-sswtz" Dec 13 01:31:55.761311 kubelet[2608]: I1213 01:31:55.760771 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/92802b75-ad0d-47ff-bba5-b8e06dbe3017-policysync\") pod \"calico-node-sswtz\" (UID: \"92802b75-ad0d-47ff-bba5-b8e06dbe3017\") " pod="calico-system/calico-node-sswtz" Dec 13 01:31:55.784193 containerd[1460]: time="2024-12-13T01:31:55.783424429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:55.784193 containerd[1460]: time="2024-12-13T01:31:55.783511551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:55.784193 containerd[1460]: time="2024-12-13T01:31:55.783542620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:55.784193 containerd[1460]: time="2024-12-13T01:31:55.783660878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:55.787416 kubelet[2608]: I1213 01:31:55.786915 2608 topology_manager.go:215] "Topology Admit Handler" podUID="8f72c213-293a-4c61-89bb-f506676840e6" podNamespace="calico-system" podName="csi-node-driver-jkhgp" Dec 13 01:31:55.791999 kubelet[2608]: E1213 01:31:55.791957 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jkhgp" podUID="8f72c213-293a-4c61-89bb-f506676840e6" Dec 13 01:31:55.842678 systemd[1]: Started cri-containerd-2f61a7cf7a892101b54e7374ff9568a7abe484c2a281879fcf8af734bed700b0.scope - libcontainer container 2f61a7cf7a892101b54e7374ff9568a7abe484c2a281879fcf8af734bed700b0. 
Dec 13 01:31:55.862777 kubelet[2608]: I1213 01:31:55.861641 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8f72c213-293a-4c61-89bb-f506676840e6-varrun\") pod \"csi-node-driver-jkhgp\" (UID: \"8f72c213-293a-4c61-89bb-f506676840e6\") " pod="calico-system/csi-node-driver-jkhgp" Dec 13 01:31:55.862777 kubelet[2608]: I1213 01:31:55.861766 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8f72c213-293a-4c61-89bb-f506676840e6-registration-dir\") pod \"csi-node-driver-jkhgp\" (UID: \"8f72c213-293a-4c61-89bb-f506676840e6\") " pod="calico-system/csi-node-driver-jkhgp" Dec 13 01:31:55.862777 kubelet[2608]: I1213 01:31:55.861890 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8f72c213-293a-4c61-89bb-f506676840e6-socket-dir\") pod \"csi-node-driver-jkhgp\" (UID: \"8f72c213-293a-4c61-89bb-f506676840e6\") " pod="calico-system/csi-node-driver-jkhgp" Dec 13 01:31:55.862777 kubelet[2608]: I1213 01:31:55.862021 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f72c213-293a-4c61-89bb-f506676840e6-kubelet-dir\") pod \"csi-node-driver-jkhgp\" (UID: \"8f72c213-293a-4c61-89bb-f506676840e6\") " pod="calico-system/csi-node-driver-jkhgp" Dec 13 01:31:55.862777 kubelet[2608]: I1213 01:31:55.862060 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8m8g\" (UniqueName: \"kubernetes.io/projected/8f72c213-293a-4c61-89bb-f506676840e6-kube-api-access-d8m8g\") pod \"csi-node-driver-jkhgp\" (UID: \"8f72c213-293a-4c61-89bb-f506676840e6\") " pod="calico-system/csi-node-driver-jkhgp" Dec 13 01:31:55.869736 kubelet[2608]: E1213 01:31:55.869695 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.870007 kubelet[2608]: W1213 01:31:55.869979 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.870130 kubelet[2608]: E1213 01:31:55.870113 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.873878 kubelet[2608]: E1213 01:31:55.873855 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.874106 kubelet[2608]: W1213 01:31:55.874085 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.874229 kubelet[2608]: E1213 01:31:55.874214 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:31:55.893479 kubelet[2608]: E1213 01:31:55.893442 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.893479 kubelet[2608]: W1213 01:31:55.893474 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.893479 kubelet[2608]: E1213 01:31:55.893505 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.967309 kubelet[2608]: E1213 01:31:55.964573 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.967309 kubelet[2608]: W1213 01:31:55.964604 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.967309 kubelet[2608]: E1213 01:31:55.964659 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.967309 kubelet[2608]: E1213 01:31:55.966352 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.967309 kubelet[2608]: W1213 01:31:55.966373 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.967309 kubelet[2608]: E1213 01:31:55.966406 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.968959 kubelet[2608]: E1213 01:31:55.967885 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.968959 kubelet[2608]: W1213 01:31:55.967907 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.968959 kubelet[2608]: E1213 01:31:55.968060 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.969232 kubelet[2608]: E1213 01:31:55.969012 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.969232 kubelet[2608]: W1213 01:31:55.969028 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.969348 kubelet[2608]: E1213 01:31:55.969311 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:31:55.971207 kubelet[2608]: E1213 01:31:55.970201 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.971207 kubelet[2608]: W1213 01:31:55.970225 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.971207 kubelet[2608]: E1213 01:31:55.970590 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.971440 kubelet[2608]: E1213 01:31:55.971332 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.971440 kubelet[2608]: W1213 01:31:55.971347 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.974467 kubelet[2608]: E1213 01:31:55.974227 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.975040 kubelet[2608]: E1213 01:31:55.975015 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.975040 kubelet[2608]: W1213 01:31:55.975031 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.975858 containerd[1460]: time="2024-12-13T01:31:55.975088117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sswtz,Uid:92802b75-ad0d-47ff-bba5-b8e06dbe3017,Namespace:calico-system,Attempt:0,}" Dec 13 01:31:55.976232 kubelet[2608]: E1213 01:31:55.976061 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.977344 kubelet[2608]: E1213 01:31:55.976855 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.977344 kubelet[2608]: W1213 01:31:55.976872 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.977847 kubelet[2608]: E1213 01:31:55.977820 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:31:55.980160 kubelet[2608]: E1213 01:31:55.980122 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.980160 kubelet[2608]: W1213 01:31:55.980150 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.980421 kubelet[2608]: E1213 01:31:55.980345 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.981026 kubelet[2608]: E1213 01:31:55.980989 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.981026 kubelet[2608]: W1213 01:31:55.981011 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.982039 kubelet[2608]: E1213 01:31:55.982005 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.982472 containerd[1460]: time="2024-12-13T01:31:55.982429984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7794f5684f-4dlw7,Uid:587acffd-b1c8-4044-a6cc-df7b8e8520af,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f61a7cf7a892101b54e7374ff9568a7abe484c2a281879fcf8af734bed700b0\"" Dec 13 01:31:55.983234 kubelet[2608]: E1213 01:31:55.983080 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.983586 kubelet[2608]: W1213 01:31:55.983395 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.984456 kubelet[2608]: E1213 01:31:55.984314 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.984456 kubelet[2608]: W1213 01:31:55.984333 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.985815 kubelet[2608]: E1213 01:31:55.985626 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.985815 kubelet[2608]: W1213 01:31:55.985646 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.986148 kubelet[2608]: E1213 01:31:55.986064 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:31:55.986569 kubelet[2608]: E1213 01:31:55.986416 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.986569 kubelet[2608]: W1213 01:31:55.986434 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.986701 kubelet[2608]: E1213 01:31:55.986588 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.986701 kubelet[2608]: E1213 01:31:55.986642 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.986701 kubelet[2608]: E1213 01:31:55.986660 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.988412 kubelet[2608]: E1213 01:31:55.988281 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.988412 kubelet[2608]: W1213 01:31:55.988302 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.988629 kubelet[2608]: E1213 01:31:55.988608 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.989365 kubelet[2608]: E1213 01:31:55.989251 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.989365 kubelet[2608]: W1213 01:31:55.989271 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.991178 kubelet[2608]: E1213 01:31:55.989900 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.991556 containerd[1460]: time="2024-12-13T01:31:55.991519457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:31:55.992201 kubelet[2608]: E1213 01:31:55.992182 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.992331 kubelet[2608]: W1213 01:31:55.992313 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.992970 kubelet[2608]: E1213 01:31:55.992675 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:31:55.993879 kubelet[2608]: E1213 01:31:55.993770 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.994678 kubelet[2608]: W1213 01:31:55.993976 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.994678 kubelet[2608]: E1213 01:31:55.994263 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.995411 kubelet[2608]: E1213 01:31:55.995217 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.995411 kubelet[2608]: W1213 01:31:55.995246 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.995963 kubelet[2608]: E1213 01:31:55.995779 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.996744 kubelet[2608]: E1213 01:31:55.996665 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.997766 kubelet[2608]: W1213 01:31:55.997464 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.997766 kubelet[2608]: E1213 01:31:55.997617 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:55.998619 kubelet[2608]: E1213 01:31:55.998600 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:55.999265 kubelet[2608]: W1213 01:31:55.998777 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:55.999774 kubelet[2608]: E1213 01:31:55.999384 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:56.001002 kubelet[2608]: E1213 01:31:56.000961 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:56.001227 kubelet[2608]: W1213 01:31:56.001013 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:56.001778 kubelet[2608]: E1213 01:31:56.001351 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:31:56.002383 kubelet[2608]: E1213 01:31:56.002151 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:56.002383 kubelet[2608]: W1213 01:31:56.002171 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:56.002383 kubelet[2608]: E1213 01:31:56.002246 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:56.003882 kubelet[2608]: E1213 01:31:56.003838 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:56.003882 kubelet[2608]: W1213 01:31:56.003882 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:56.004263 kubelet[2608]: E1213 01:31:56.004098 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:56.005404 kubelet[2608]: E1213 01:31:56.005377 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:56.005404 kubelet[2608]: W1213 01:31:56.005401 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:56.005549 kubelet[2608]: E1213 01:31:56.005455 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:56.033231 kubelet[2608]: E1213 01:31:56.033125 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:56.033231 kubelet[2608]: W1213 01:31:56.033154 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:56.033231 kubelet[2608]: E1213 01:31:56.033188 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:56.043395 containerd[1460]: time="2024-12-13T01:31:56.043212485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:56.043395 containerd[1460]: time="2024-12-13T01:31:56.043291854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:56.043395 containerd[1460]: time="2024-12-13T01:31:56.043336274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:56.043749 containerd[1460]: time="2024-12-13T01:31:56.043500324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:56.079203 systemd[1]: Started cri-containerd-cf31469f097863bafe5a5fc56138276b07a540999213d7e0f46abf5bd3b98c1f.scope - libcontainer container cf31469f097863bafe5a5fc56138276b07a540999213d7e0f46abf5bd3b98c1f. Dec 13 01:31:56.139743 containerd[1460]: time="2024-12-13T01:31:56.139680778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sswtz,Uid:92802b75-ad0d-47ff-bba5-b8e06dbe3017,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf31469f097863bafe5a5fc56138276b07a540999213d7e0f46abf5bd3b98c1f\"" Dec 13 01:31:57.177467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127537946.mount: Deactivated successfully. Dec 13 01:31:57.571023 kubelet[2608]: E1213 01:31:57.570856 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jkhgp" podUID="8f72c213-293a-4c61-89bb-f506676840e6" Dec 13 01:31:58.093975 containerd[1460]: time="2024-12-13T01:31:58.093905076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:58.095322 containerd[1460]: time="2024-12-13T01:31:58.095248483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Dec 13 01:31:58.097058 containerd[1460]: time="2024-12-13T01:31:58.097018001Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:58.107180 containerd[1460]: time="2024-12-13T01:31:58.106750004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:58.107914 containerd[1460]: time="2024-12-13T01:31:58.107543534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.115815035s" Dec 13 01:31:58.107914 containerd[1460]: time="2024-12-13T01:31:58.107590501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 01:31:58.109839 containerd[1460]: time="2024-12-13T01:31:58.109802178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:31:58.130706 containerd[1460]: time="2024-12-13T01:31:58.130655722Z" level=info msg="CreateContainer within sandbox \"2f61a7cf7a892101b54e7374ff9568a7abe484c2a281879fcf8af734bed700b0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:31:58.152655 containerd[1460]: time="2024-12-13T01:31:58.152587320Z" level=info msg="CreateContainer within sandbox \"2f61a7cf7a892101b54e7374ff9568a7abe484c2a281879fcf8af734bed700b0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8f0d944d38b759e4c144707484b7c9f333d13fa83e4ccc04a59f9f1c9e8efd9f\"" Dec 13 01:31:58.153402 containerd[1460]: 
time="2024-12-13T01:31:58.153343870Z" level=info msg="StartContainer for \"8f0d944d38b759e4c144707484b7c9f333d13fa83e4ccc04a59f9f1c9e8efd9f\"" Dec 13 01:31:58.208237 systemd[1]: Started cri-containerd-8f0d944d38b759e4c144707484b7c9f333d13fa83e4ccc04a59f9f1c9e8efd9f.scope - libcontainer container 8f0d944d38b759e4c144707484b7c9f333d13fa83e4ccc04a59f9f1c9e8efd9f. Dec 13 01:31:58.273240 containerd[1460]: time="2024-12-13T01:31:58.273187213Z" level=info msg="StartContainer for \"8f0d944d38b759e4c144707484b7c9f333d13fa83e4ccc04a59f9f1c9e8efd9f\" returns successfully" Dec 13 01:31:58.669666 kubelet[2608]: E1213 01:31:58.669469 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.669666 kubelet[2608]: W1213 01:31:58.669496 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.669666 kubelet[2608]: E1213 01:31:58.669534 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.671079 kubelet[2608]: E1213 01:31:58.670622 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.671079 kubelet[2608]: W1213 01:31:58.670638 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.671079 kubelet[2608]: E1213 01:31:58.670665 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.671638 kubelet[2608]: E1213 01:31:58.671455 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.671638 kubelet[2608]: W1213 01:31:58.671474 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.671638 kubelet[2608]: E1213 01:31:58.671497 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.672400 kubelet[2608]: E1213 01:31:58.672085 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.672400 kubelet[2608]: W1213 01:31:58.672103 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.672400 kubelet[2608]: E1213 01:31:58.672126 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:31:58.672947 kubelet[2608]: E1213 01:31:58.672767 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.672947 kubelet[2608]: W1213 01:31:58.672786 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.672947 kubelet[2608]: E1213 01:31:58.672806 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.673491 kubelet[2608]: E1213 01:31:58.673336 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.673491 kubelet[2608]: W1213 01:31:58.673353 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.673491 kubelet[2608]: E1213 01:31:58.673373 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.674215 kubelet[2608]: E1213 01:31:58.673917 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.674215 kubelet[2608]: W1213 01:31:58.673959 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.674215 kubelet[2608]: E1213 01:31:58.673979 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.674569 kubelet[2608]: E1213 01:31:58.674458 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.674569 kubelet[2608]: W1213 01:31:58.674475 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.674569 kubelet[2608]: E1213 01:31:58.674495 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.675281 kubelet[2608]: E1213 01:31:58.675019 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.675281 kubelet[2608]: W1213 01:31:58.675035 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.675281 kubelet[2608]: E1213 01:31:58.675055 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:31:58.675668 kubelet[2608]: E1213 01:31:58.675530 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.675668 kubelet[2608]: W1213 01:31:58.675546 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.675668 kubelet[2608]: E1213 01:31:58.675566 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.677102 kubelet[2608]: E1213 01:31:58.676589 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.677102 kubelet[2608]: W1213 01:31:58.676610 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.677102 kubelet[2608]: E1213 01:31:58.676630 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.677509 kubelet[2608]: E1213 01:31:58.677491 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.677622 kubelet[2608]: W1213 01:31:58.677605 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.677824 kubelet[2608]: E1213 01:31:58.677703 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.678139 kubelet[2608]: E1213 01:31:58.678121 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.678400 kubelet[2608]: W1213 01:31:58.678248 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.678400 kubelet[2608]: E1213 01:31:58.678276 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.678744 kubelet[2608]: E1213 01:31:58.678720 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.678744 kubelet[2608]: W1213 01:31:58.678739 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.679073 kubelet[2608]: E1213 01:31:58.678759 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:31:58.679158 kubelet[2608]: E1213 01:31:58.679077 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.679158 kubelet[2608]: W1213 01:31:58.679091 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.679158 kubelet[2608]: E1213 01:31:58.679111 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.702226 kubelet[2608]: E1213 01:31:58.702177 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.702226 kubelet[2608]: W1213 01:31:58.702208 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.702546 kubelet[2608]: E1213 01:31:58.702240 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.702772 kubelet[2608]: E1213 01:31:58.702728 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.702772 kubelet[2608]: W1213 01:31:58.702749 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.702954 kubelet[2608]: E1213 01:31:58.702800 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.703272 kubelet[2608]: E1213 01:31:58.703240 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.703272 kubelet[2608]: W1213 01:31:58.703261 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.703445 kubelet[2608]: E1213 01:31:58.703291 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.703763 kubelet[2608]: E1213 01:31:58.703727 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.703763 kubelet[2608]: W1213 01:31:58.703755 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.703972 kubelet[2608]: E1213 01:31:58.703811 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:31:58.704274 kubelet[2608]: E1213 01:31:58.704251 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.704274 kubelet[2608]: W1213 01:31:58.704271 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.704399 kubelet[2608]: E1213 01:31:58.704312 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.704687 kubelet[2608]: E1213 01:31:58.704669 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.704687 kubelet[2608]: W1213 01:31:58.704686 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.704849 kubelet[2608]: E1213 01:31:58.704822 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.705226 kubelet[2608]: E1213 01:31:58.705198 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.705226 kubelet[2608]: W1213 01:31:58.705228 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.705461 kubelet[2608]: E1213 01:31:58.705374 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.705545 kubelet[2608]: E1213 01:31:58.705532 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.705598 kubelet[2608]: W1213 01:31:58.705544 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.705651 kubelet[2608]: E1213 01:31:58.705642 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.705994 kubelet[2608]: E1213 01:31:58.705972 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.705994 kubelet[2608]: W1213 01:31:58.705994 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.706126 kubelet[2608]: E1213 01:31:58.706024 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:31:58.706365 kubelet[2608]: E1213 01:31:58.706344 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.706365 kubelet[2608]: W1213 01:31:58.706363 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.706493 kubelet[2608]: E1213 01:31:58.706401 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.706756 kubelet[2608]: E1213 01:31:58.706736 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.706756 kubelet[2608]: W1213 01:31:58.706754 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.706884 kubelet[2608]: E1213 01:31:58.706793 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.707194 kubelet[2608]: E1213 01:31:58.707174 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.707194 kubelet[2608]: W1213 01:31:58.707192 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.707340 kubelet[2608]: E1213 01:31:58.707328 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.707564 kubelet[2608]: E1213 01:31:58.707544 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.707564 kubelet[2608]: W1213 01:31:58.707561 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.707679 kubelet[2608]: E1213 01:31:58.707588 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.707963 kubelet[2608]: E1213 01:31:58.707912 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.708049 kubelet[2608]: W1213 01:31:58.707988 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.708049 kubelet[2608]: E1213 01:31:58.708018 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:31:58.708373 kubelet[2608]: E1213 01:31:58.708353 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.708373 kubelet[2608]: W1213 01:31:58.708370 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.708503 kubelet[2608]: E1213 01:31:58.708408 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.709174 kubelet[2608]: E1213 01:31:58.709152 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.709174 kubelet[2608]: W1213 01:31:58.709171 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.709350 kubelet[2608]: E1213 01:31:58.709330 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.709594 kubelet[2608]: E1213 01:31:58.709573 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.709594 kubelet[2608]: W1213 01:31:58.709591 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.709715 kubelet[2608]: E1213 01:31:58.709611 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:31:58.710184 kubelet[2608]: E1213 01:31:58.710161 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:31:58.710184 kubelet[2608]: W1213 01:31:58.710180 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:31:58.710316 kubelet[2608]: E1213 01:31:58.710200 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
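The identical error triplet recurs in bursts because the probe is dynamic rather than one-shot: kubelet watches the FlexVolume plugin directory and re-runs driver discovery whenever the directory changes, so every write under the plugin root replays the failing init call until a working driver lands there. Roughly, and only as an illustration of the mechanism (kubelet's prober is built on an fsnotify-style watcher like this):

    package main

    import (
        "fmt"
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        // The directory kubelet probes for FlexVolume drivers on this host.
        if err := w.Add("/opt/libexec/kubernetes/kubelet-plugins/volume/exec"); err != nil {
            log.Fatal(err)
        }
        for {
            select {
            case ev := <-w.Events:
                // Each event triggers a re-probe, i.e. another init call and
                // another error triplet while the driver binary is missing.
                fmt.Println("plugin dir changed:", ev.Name, ev.Op)
            case err := <-w.Errors:
                fmt.Println("watch error:", err)
            }
        }
    }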
Error: unexpected end of JSON input" Dec 13 01:31:59.070506 containerd[1460]: time="2024-12-13T01:31:59.070021759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:59.072277 containerd[1460]: time="2024-12-13T01:31:59.072119702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 01:31:59.075850 containerd[1460]: time="2024-12-13T01:31:59.074157627Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:59.077397 containerd[1460]: time="2024-12-13T01:31:59.077310456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:59.078516 containerd[1460]: time="2024-12-13T01:31:59.078275047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 968.085737ms" Dec 13 01:31:59.078516 containerd[1460]: time="2024-12-13T01:31:59.078327131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:31:59.081533 containerd[1460]: time="2024-12-13T01:31:59.081466265Z" level=info msg="CreateContainer within sandbox \"cf31469f097863bafe5a5fc56138276b07a540999213d7e0f46abf5bd3b98c1f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:31:59.101680 containerd[1460]: time="2024-12-13T01:31:59.101600119Z" level=info msg="CreateContainer within sandbox \"cf31469f097863bafe5a5fc56138276b07a540999213d7e0f46abf5bd3b98c1f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"51c8cc06638b793d6500de2b1a7bd55f6a6f296f2a40bb0cabb6bcc2961e0584\"" Dec 13 01:31:59.104183 containerd[1460]: time="2024-12-13T01:31:59.104015816Z" level=info msg="StartContainer for \"51c8cc06638b793d6500de2b1a7bd55f6a6f296f2a40bb0cabb6bcc2961e0584\"" Dec 13 01:31:59.180239 systemd[1]: Started cri-containerd-51c8cc06638b793d6500de2b1a7bd55f6a6f296f2a40bb0cabb6bcc2961e0584.scope - libcontainer container 51c8cc06638b793d6500de2b1a7bd55f6a6f296f2a40bb0cabb6bcc2961e0584. Dec 13 01:31:59.235431 containerd[1460]: time="2024-12-13T01:31:59.235371843Z" level=info msg="StartContainer for \"51c8cc06638b793d6500de2b1a7bd55f6a6f296f2a40bb0cabb6bcc2961e0584\" returns successfully" Dec 13 01:31:59.256139 systemd[1]: cri-containerd-51c8cc06638b793d6500de2b1a7bd55f6a6f296f2a40bb0cabb6bcc2961e0584.scope: Deactivated successfully. Dec 13 01:31:59.302882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51c8cc06638b793d6500de2b1a7bd55f6a6f296f2a40bb0cabb6bcc2961e0584-rootfs.mount: Deactivated successfully. 
Dec 13 01:31:59.570256 kubelet[2608]: E1213 01:31:59.570210 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jkhgp" podUID="8f72c213-293a-4c61-89bb-f506676840e6"
Dec 13 01:31:59.673392 kubelet[2608]: I1213 01:31:59.671642 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:31:59.693621 kubelet[2608]: I1213 01:31:59.693572 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7794f5684f-4dlw7" podStartSLOduration=2.575386144 podStartE2EDuration="4.693517428s" podCreationTimestamp="2024-12-13 01:31:55 +0000 UTC" firstStartedPulling="2024-12-13 01:31:55.990340138 +0000 UTC m=+21.631187223" lastFinishedPulling="2024-12-13 01:31:58.108471411 +0000 UTC m=+23.749318507" observedRunningTime="2024-12-13 01:31:58.685666818 +0000 UTC m=+24.326513932" watchObservedRunningTime="2024-12-13 01:31:59.693517428 +0000 UTC m=+25.334364546"
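The numbers in the startup-latency entry are internally consistent if podStartSLOduration is read as the end-to-end startup time minus the image-pull window, computed on the monotonic (m=+) offsets rather than the wall-clock timestamps:

    podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
                        = 4.693517428s - (23.749318507s - 21.631187223s)
                        = 4.693517428s - 2.118131284s
                        = 2.575386144s

which matches the logged value exactly; time spent pulling the image is excluded from the SLO figure.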
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:05.092630 containerd[1460]: time="2024-12-13T01:32:05.091858249Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.415055108s" Dec 13 01:32:05.092630 containerd[1460]: time="2024-12-13T01:32:05.091904053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:32:05.094640 containerd[1460]: time="2024-12-13T01:32:05.094493474Z" level=info msg="CreateContainer within sandbox \"cf31469f097863bafe5a5fc56138276b07a540999213d7e0f46abf5bd3b98c1f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:32:05.114074 containerd[1460]: time="2024-12-13T01:32:05.114018702Z" level=info msg="CreateContainer within sandbox \"cf31469f097863bafe5a5fc56138276b07a540999213d7e0f46abf5bd3b98c1f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"15f2dd20a911239a59e401cd841c56a960fc84f6e0e208a31e2295a922162d91\"" Dec 13 01:32:05.115247 containerd[1460]: time="2024-12-13T01:32:05.114856911Z" level=info msg="StartContainer for \"15f2dd20a911239a59e401cd841c56a960fc84f6e0e208a31e2295a922162d91\"" Dec 13 01:32:05.188172 systemd[1]: Started cri-containerd-15f2dd20a911239a59e401cd841c56a960fc84f6e0e208a31e2295a922162d91.scope - libcontainer container 15f2dd20a911239a59e401cd841c56a960fc84f6e0e208a31e2295a922162d91. Dec 13 01:32:05.239492 containerd[1460]: time="2024-12-13T01:32:05.239427947Z" level=info msg="StartContainer for \"15f2dd20a911239a59e401cd841c56a960fc84f6e0e208a31e2295a922162d91\" returns successfully" Dec 13 01:32:05.571693 kubelet[2608]: E1213 01:32:05.571168 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jkhgp" podUID="8f72c213-293a-4c61-89bb-f506676840e6" Dec 13 01:32:06.138251 containerd[1460]: time="2024-12-13T01:32:06.138192302Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:32:06.141982 systemd[1]: cri-containerd-15f2dd20a911239a59e401cd841c56a960fc84f6e0e208a31e2295a922162d91.scope: Deactivated successfully. Dec 13 01:32:06.174650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15f2dd20a911239a59e401cd841c56a960fc84f6e0e208a31e2295a922162d91-rootfs.mount: Deactivated successfully. 
Dec 13 01:32:06.235176 kubelet[2608]: I1213 01:32:06.235106 2608 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:32:06.265701 kubelet[2608]: I1213 01:32:06.263473 2608 topology_manager.go:215] "Topology Admit Handler" podUID="d9ea8ed9-c62f-497b-8b5c-9f11233b2716" podNamespace="kube-system" podName="coredns-76f75df574-xzvvx"
Dec 13 01:32:06.268414 kubelet[2608]: I1213 01:32:06.267852 2608 topology_manager.go:215] "Topology Admit Handler" podUID="ad72aa77-7913-4d0d-bc7f-8bd9b390797b" podNamespace="kube-system" podName="coredns-76f75df574-wsrph"
Dec 13 01:32:06.271831 kubelet[2608]: W1213 01:32:06.271805 2608 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' and this object
Dec 13 01:32:06.272082 kubelet[2608]: E1213 01:32:06.272044 2608 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' and this object
Dec 13 01:32:06.278898 kubelet[2608]: I1213 01:32:06.277641 2608 topology_manager.go:215] "Topology Admit Handler" podUID="7d5cda69-3ee1-4238-8b54-30e176b7b3d7" podNamespace="calico-apiserver" podName="calico-apiserver-74c8c6c788-jkfpz"
Dec 13 01:32:06.281713 systemd[1]: Created slice kubepods-burstable-podd9ea8ed9_c62f_497b_8b5c_9f11233b2716.slice - libcontainer container kubepods-burstable-podd9ea8ed9_c62f_497b_8b5c_9f11233b2716.slice.
Dec 13 01:32:06.291522 kubelet[2608]: I1213 01:32:06.290967 2608 topology_manager.go:215] "Topology Admit Handler" podUID="7d47f189-a8fb-4943-9daa-99592014efac" podNamespace="calico-system" podName="calico-kube-controllers-d99b9d6cd-2nmkc"
Dec 13 01:32:06.293185 kubelet[2608]: I1213 01:32:06.293035 2608 topology_manager.go:215] "Topology Admit Handler" podUID="6a6f094a-3181-4104-9078-a9c6ee707b6a" podNamespace="calico-apiserver" podName="calico-apiserver-74c8c6c788-m2ckw"
Dec 13 01:32:06.293950 kubelet[2608]: W1213 01:32:06.293465 2608 reflector.go:539] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' and this object
Dec 13 01:32:06.293950 kubelet[2608]: E1213 01:32:06.293506 2608 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' and this object
Dec 13 01:32:06.300737 systemd[1]: Created slice kubepods-burstable-podad72aa77_7913_4d0d_bc7f_8bd9b390797b.slice - libcontainer container kubepods-burstable-podad72aa77_7913_4d0d_bc7f_8bd9b390797b.slice.
Dec 13 01:32:06.315372 systemd[1]: Created slice kubepods-besteffort-pod7d5cda69_3ee1_4238_8b54_30e176b7b3d7.slice - libcontainer container kubepods-besteffort-pod7d5cda69_3ee1_4238_8b54_30e176b7b3d7.slice.
Dec 13 01:32:06.331201 systemd[1]: Created slice kubepods-besteffort-pod7d47f189_a8fb_4943_9daa_99592014efac.slice - libcontainer container kubepods-besteffort-pod7d47f189_a8fb_4943_9daa_99592014efac.slice.
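Both reflector failures are the node authorizer working as designed rather than a broken RBAC setup: a kubelet may only read a ConfigMap once the API server's authorization graph links that object to a pod bound to this node, and these pods were admitted only milliseconds earlier, so the first list attempt races the graph update and is denied. The read being attempted is equivalent to the following client-go call, which starts succeeding once the binding is visible (a sketch; the in-cluster config stands in for the kubelet's node credentials, and the "Corefile" key is the conventional coredns layout):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Under the node authorizer this Get is denied with
        // "no relationship found between node ... and this object"
        // until a pod mounting the ConfigMap is bound to the node.
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            fmt.Println("denied or unavailable:", err)
            return
        }
        fmt.Println(cm.Data["Corefile"])
    }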
Dec 13 01:32:06.342785 systemd[1]: Created slice kubepods-besteffort-pod6a6f094a_3181_4104_9078_a9c6ee707b6a.slice - libcontainer container kubepods-besteffort-pod6a6f094a_3181_4104_9078_a9c6ee707b6a.slice.
Dec 13 01:32:06.377462 kubelet[2608]: I1213 01:32:06.364138 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9ea8ed9-c62f-497b-8b5c-9f11233b2716-config-volume\") pod \"coredns-76f75df574-xzvvx\" (UID: \"d9ea8ed9-c62f-497b-8b5c-9f11233b2716\") " pod="kube-system/coredns-76f75df574-xzvvx"
Dec 13 01:32:06.377462 kubelet[2608]: I1213 01:32:06.364195 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdxvv\" (UniqueName: \"kubernetes.io/projected/d9ea8ed9-c62f-497b-8b5c-9f11233b2716-kube-api-access-zdxvv\") pod \"coredns-76f75df574-xzvvx\" (UID: \"d9ea8ed9-c62f-497b-8b5c-9f11233b2716\") " pod="kube-system/coredns-76f75df574-xzvvx"
Dec 13 01:32:06.377462 kubelet[2608]: I1213 01:32:06.364232 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgqfq\" (UniqueName: \"kubernetes.io/projected/7d47f189-a8fb-4943-9daa-99592014efac-kube-api-access-pgqfq\") pod \"calico-kube-controllers-d99b9d6cd-2nmkc\" (UID: \"7d47f189-a8fb-4943-9daa-99592014efac\") " pod="calico-system/calico-kube-controllers-d99b9d6cd-2nmkc"
Dec 13 01:32:06.377462 kubelet[2608]: I1213 01:32:06.364273 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7d5cda69-3ee1-4238-8b54-30e176b7b3d7-calico-apiserver-certs\") pod \"calico-apiserver-74c8c6c788-jkfpz\" (UID: \"7d5cda69-3ee1-4238-8b54-30e176b7b3d7\") " pod="calico-apiserver/calico-apiserver-74c8c6c788-jkfpz"
Dec 13 01:32:06.377462 kubelet[2608]: I1213 01:32:06.364314 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad72aa77-7913-4d0d-bc7f-8bd9b390797b-config-volume\") pod \"coredns-76f75df574-wsrph\" (UID: \"ad72aa77-7913-4d0d-bc7f-8bd9b390797b\") " pod="kube-system/coredns-76f75df574-wsrph"
Dec 13 01:32:06.378105 kubelet[2608]: I1213 01:32:06.364350 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts2x6\" (UniqueName: \"kubernetes.io/projected/ad72aa77-7913-4d0d-bc7f-8bd9b390797b-kube-api-access-ts2x6\") pod \"coredns-76f75df574-wsrph\" (UID: \"ad72aa77-7913-4d0d-bc7f-8bd9b390797b\") " pod="kube-system/coredns-76f75df574-wsrph"
Dec 13 01:32:06.378105 kubelet[2608]: I1213 01:32:06.364391 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4p5h\" (UniqueName: \"kubernetes.io/projected/7d5cda69-3ee1-4238-8b54-30e176b7b3d7-kube-api-access-h4p5h\") pod \"calico-apiserver-74c8c6c788-jkfpz\" (UID: \"7d5cda69-3ee1-4238-8b54-30e176b7b3d7\") " pod="calico-apiserver/calico-apiserver-74c8c6c788-jkfpz"
Dec 13 01:32:06.378105 kubelet[2608]: I1213 01:32:06.364429 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a6f094a-3181-4104-9078-a9c6ee707b6a-calico-apiserver-certs\") pod \"calico-apiserver-74c8c6c788-m2ckw\" (UID: \"6a6f094a-3181-4104-9078-a9c6ee707b6a\") " pod="calico-apiserver/calico-apiserver-74c8c6c788-m2ckw"
Dec 13 01:32:06.378105 kubelet[2608]: I1213 01:32:06.364462 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d47f189-a8fb-4943-9daa-99592014efac-tigera-ca-bundle\") pod \"calico-kube-controllers-d99b9d6cd-2nmkc\" (UID: \"7d47f189-a8fb-4943-9daa-99592014efac\") " pod="calico-system/calico-kube-controllers-d99b9d6cd-2nmkc"
Dec 13 01:32:06.378105 kubelet[2608]: I1213 01:32:06.364517 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zhwm\" (UniqueName: \"kubernetes.io/projected/6a6f094a-3181-4104-9078-a9c6ee707b6a-kube-api-access-6zhwm\") pod \"calico-apiserver-74c8c6c788-m2ckw\" (UID: \"6a6f094a-3181-4104-9078-a9c6ee707b6a\") " pod="calico-apiserver/calico-apiserver-74c8c6c788-m2ckw"
Dec 13 01:32:06.687699 containerd[1460]: time="2024-12-13T01:32:06.687642486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d99b9d6cd-2nmkc,Uid:7d47f189-a8fb-4943-9daa-99592014efac,Namespace:calico-system,Attempt:0,}"
Dec 13 01:32:06.942212 containerd[1460]: time="2024-12-13T01:32:06.941215214Z" level=info msg="shim disconnected" id=15f2dd20a911239a59e401cd841c56a960fc84f6e0e208a31e2295a922162d91 namespace=k8s.io
Dec 13 01:32:06.942212 containerd[1460]: time="2024-12-13T01:32:06.941287529Z" level=warning msg="cleaning up after shim disconnected" id=15f2dd20a911239a59e401cd841c56a960fc84f6e0e208a31e2295a922162d91 namespace=k8s.io
Dec 13 01:32:06.942212 containerd[1460]: time="2024-12-13T01:32:06.941300716Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:32:06.966221 containerd[1460]: time="2024-12-13T01:32:06.965786953Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:32:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:32:07.025376 containerd[1460]: time="2024-12-13T01:32:07.025309067Z" level=error msg="Failed to destroy network for sandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:32:07.025804 containerd[1460]: time="2024-12-13T01:32:07.025757328Z" level=error msg="encountered an error cleaning up failed sandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:32:07.026010 containerd[1460]: time="2024-12-13T01:32:07.025837980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d99b9d6cd-2nmkc,Uid:7d47f189-a8fb-4943-9daa-99592014efac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:32:07.026273 kubelet[2608]: E1213 01:32:07.026223 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:32:07.026737 kubelet[2608]: E1213 01:32:07.026310 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d99b9d6cd-2nmkc"
Dec 13 01:32:07.026737 kubelet[2608]: E1213 01:32:07.026347 2608 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d99b9d6cd-2nmkc"
Dec 13 01:32:07.026737 kubelet[2608]: E1213 01:32:07.026451 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d99b9d6cd-2nmkc_calico-system(7d47f189-a8fb-4943-9daa-99592014efac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d99b9d6cd-2nmkc_calico-system(7d47f189-a8fb-4943-9daa-99592014efac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d99b9d6cd-2nmkc" podUID="7d47f189-a8fb-4943-9daa-99592014efac"
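Every sandbox failure in this stretch reduces to a single missing precondition: Calico's CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it has started and registered the node, and calico/node is still initializing here. The equivalent readiness check is trivial (illustrative only):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The Calico CNI plugin reads this file to learn which Node object
        // it belongs to; its absence produces the sandbox errors above.
        const nodenameFile = "/var/lib/calico/nodename"
        if _, err := os.Stat(nodenameFile); err != nil {
            fmt.Println("calico/node not ready yet:", err)
            os.Exit(1)
        }
        name, _ := os.ReadFile(nodenameFile)
        fmt.Println("calico node name:", string(name))
    }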
Dec 13 01:32:07.466913 kubelet[2608]: E1213 01:32:07.466850 2608 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:32:07.467127 kubelet[2608]: E1213 01:32:07.467000 2608 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9ea8ed9-c62f-497b-8b5c-9f11233b2716-config-volume podName:d9ea8ed9-c62f-497b-8b5c-9f11233b2716 nodeName:}" failed. No retries permitted until 2024-12-13 01:32:07.966969788 +0000 UTC m=+33.607816864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d9ea8ed9-c62f-497b-8b5c-9f11233b2716-config-volume") pod "coredns-76f75df574-xzvvx" (UID: "d9ea8ed9-c62f-497b-8b5c-9f11233b2716") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:32:07.467381 kubelet[2608]: E1213 01:32:07.466850 2608 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:32:07.467381 kubelet[2608]: E1213 01:32:07.467350 2608 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ad72aa77-7913-4d0d-bc7f-8bd9b390797b-config-volume podName:ad72aa77-7913-4d0d-bc7f-8bd9b390797b nodeName:}" failed. No retries permitted until 2024-12-13 01:32:07.967327136 +0000 UTC m=+33.608174229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ad72aa77-7913-4d0d-bc7f-8bd9b390797b-config-volume") pod "coredns-76f75df574-wsrph" (UID: "ad72aa77-7913-4d0d-bc7f-8bd9b390797b") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:32:07.501039 kubelet[2608]: E1213 01:32:07.500987 2608 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:32:07.501039 kubelet[2608]: E1213 01:32:07.501035 2608 projected.go:200] Error preparing data for projected volume kube-api-access-6zhwm for pod calico-apiserver/calico-apiserver-74c8c6c788-m2ckw: failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:32:07.501298 kubelet[2608]: E1213 01:32:07.501127 2608 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a6f094a-3181-4104-9078-a9c6ee707b6a-kube-api-access-6zhwm podName:6a6f094a-3181-4104-9078-a9c6ee707b6a nodeName:}" failed. No retries permitted until 2024-12-13 01:32:08.001098291 +0000 UTC m=+33.641945367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6zhwm" (UniqueName: "kubernetes.io/projected/6a6f094a-3181-4104-9078-a9c6ee707b6a-kube-api-access-6zhwm") pod "calico-apiserver-74c8c6c788-m2ckw" (UID: "6a6f094a-3181-4104-9078-a9c6ee707b6a") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:32:07.508341 kubelet[2608]: E1213 01:32:07.508299 2608 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:32:07.508341 kubelet[2608]: E1213 01:32:07.508340 2608 projected.go:200] Error preparing data for projected volume kube-api-access-h4p5h for pod calico-apiserver/calico-apiserver-74c8c6c788-jkfpz: failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:32:07.508545 kubelet[2608]: E1213 01:32:07.508436 2608 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d5cda69-3ee1-4238-8b54-30e176b7b3d7-kube-api-access-h4p5h podName:7d5cda69-3ee1-4238-8b54-30e176b7b3d7 nodeName:}" failed. No retries permitted until 2024-12-13 01:32:08.00839379 +0000 UTC m=+33.649240883 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h4p5h" (UniqueName: "kubernetes.io/projected/7d5cda69-3ee1-4238-8b54-30e176b7b3d7-kube-api-access-h4p5h") pod "calico-apiserver-74c8c6c788-jkfpz" (UID: "7d5cda69-3ee1-4238-8b54-30e176b7b3d7") : failed to sync configmap cache: timed out waiting for the condition
\"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:07.666820 kubelet[2608]: E1213 01:32:07.666771 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jkhgp" Dec 13 01:32:07.666820 kubelet[2608]: E1213 01:32:07.666806 2608 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jkhgp" Dec 13 01:32:07.666956 kubelet[2608]: E1213 01:32:07.666887 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jkhgp_calico-system(8f72c213-293a-4c61-89bb-f506676840e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jkhgp_calico-system(8f72c213-293a-4c61-89bb-f506676840e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jkhgp" podUID="8f72c213-293a-4c61-89bb-f506676840e6" Dec 13 01:32:07.668852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3-shm.mount: Deactivated successfully. 
Dec 13 01:32:07.704348 containerd[1460]: time="2024-12-13T01:32:07.704233095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:32:07.707387 kubelet[2608]: I1213 01:32:07.706535 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:07.709890 containerd[1460]: time="2024-12-13T01:32:07.709842923Z" level=info msg="StopPodSandbox for \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\"" Dec 13 01:32:07.710523 kubelet[2608]: I1213 01:32:07.710463 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:07.712497 containerd[1460]: time="2024-12-13T01:32:07.712111895Z" level=info msg="Ensure that sandbox eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3 in task-service has been cleanup successfully" Dec 13 01:32:07.712765 containerd[1460]: time="2024-12-13T01:32:07.712734474Z" level=info msg="StopPodSandbox for \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\"" Dec 13 01:32:07.713231 containerd[1460]: time="2024-12-13T01:32:07.713131821Z" level=info msg="Ensure that sandbox 0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b in task-service has been cleanup successfully" Dec 13 01:32:07.786251 containerd[1460]: time="2024-12-13T01:32:07.785853706Z" level=error msg="StopPodSandbox for \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\" failed" error="failed to destroy network for sandbox \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:07.786405 kubelet[2608]: E1213 01:32:07.786160 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:07.786405 kubelet[2608]: E1213 01:32:07.786251 2608 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3"} Dec 13 01:32:07.786405 kubelet[2608]: E1213 01:32:07.786302 2608 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f72c213-293a-4c61-89bb-f506676840e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:32:07.786405 kubelet[2608]: E1213 01:32:07.786345 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f72c213-293a-4c61-89bb-f506676840e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jkhgp" podUID="8f72c213-293a-4c61-89bb-f506676840e6" Dec 13 01:32:07.791715 containerd[1460]: time="2024-12-13T01:32:07.791631232Z" level=error msg="StopPodSandbox for \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\" failed" error="failed to destroy network for sandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:07.791981 kubelet[2608]: E1213 01:32:07.791947 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:07.792123 kubelet[2608]: E1213 01:32:07.792003 2608 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b"} Dec 13 01:32:07.792123 kubelet[2608]: E1213 01:32:07.792114 2608 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d47f189-a8fb-4943-9daa-99592014efac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:32:07.792268 kubelet[2608]: E1213 01:32:07.792174 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d47f189-a8fb-4943-9daa-99592014efac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d99b9d6cd-2nmkc" podUID="7d47f189-a8fb-4943-9daa-99592014efac" Dec 13 01:32:08.091958 containerd[1460]: time="2024-12-13T01:32:08.091809814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xzvvx,Uid:d9ea8ed9-c62f-497b-8b5c-9f11233b2716,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:08.107177 containerd[1460]: time="2024-12-13T01:32:08.107103241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wsrph,Uid:ad72aa77-7913-4d0d-bc7f-8bd9b390797b,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:08.185189 containerd[1460]: time="2024-12-13T01:32:08.184243179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c8c6c788-jkfpz,Uid:7d5cda69-3ee1-4238-8b54-30e176b7b3d7,Namespace:calico-apiserver,Attempt:0,}" 
Dec 13 01:32:08.188203 containerd[1460]: time="2024-12-13T01:32:08.188160195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c8c6c788-m2ckw,Uid:6a6f094a-3181-4104-9078-a9c6ee707b6a,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:32:08.328253 containerd[1460]: time="2024-12-13T01:32:08.328119769Z" level=error msg="Failed to destroy network for sandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.329353 containerd[1460]: time="2024-12-13T01:32:08.329241890Z" level=error msg="encountered an error cleaning up failed sandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.329353 containerd[1460]: time="2024-12-13T01:32:08.329326720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xzvvx,Uid:d9ea8ed9-c62f-497b-8b5c-9f11233b2716,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.329697 kubelet[2608]: E1213 01:32:08.329635 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.330673 kubelet[2608]: E1213 01:32:08.329712 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xzvvx" Dec 13 01:32:08.330673 kubelet[2608]: E1213 01:32:08.329747 2608 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xzvvx" Dec 13 01:32:08.330673 kubelet[2608]: E1213 01:32:08.329825 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xzvvx_kube-system(d9ea8ed9-c62f-497b-8b5c-9f11233b2716)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xzvvx_kube-system(d9ea8ed9-c62f-497b-8b5c-9f11233b2716)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xzvvx" podUID="d9ea8ed9-c62f-497b-8b5c-9f11233b2716" Dec 13 01:32:08.343216 containerd[1460]: time="2024-12-13T01:32:08.342608943Z" level=error msg="Failed to destroy network for sandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.343216 containerd[1460]: time="2024-12-13T01:32:08.343149989Z" level=error msg="encountered an error cleaning up failed sandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.345890 containerd[1460]: time="2024-12-13T01:32:08.343222547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wsrph,Uid:ad72aa77-7913-4d0d-bc7f-8bd9b390797b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.346123 kubelet[2608]: E1213 01:32:08.343798 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.346123 kubelet[2608]: E1213 01:32:08.343870 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-wsrph" Dec 13 01:32:08.346123 kubelet[2608]: E1213 01:32:08.343913 2608 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-wsrph" Dec 13 01:32:08.346330 kubelet[2608]: E1213 01:32:08.344498 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-wsrph_kube-system(ad72aa77-7913-4d0d-bc7f-8bd9b390797b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-wsrph_kube-system(ad72aa77-7913-4d0d-bc7f-8bd9b390797b)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wsrph" podUID="ad72aa77-7913-4d0d-bc7f-8bd9b390797b" Dec 13 01:32:08.397515 containerd[1460]: time="2024-12-13T01:32:08.396766038Z" level=error msg="Failed to destroy network for sandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.397515 containerd[1460]: time="2024-12-13T01:32:08.397248918Z" level=error msg="encountered an error cleaning up failed sandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.397515 containerd[1460]: time="2024-12-13T01:32:08.397323238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c8c6c788-jkfpz,Uid:7d5cda69-3ee1-4238-8b54-30e176b7b3d7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.397865 kubelet[2608]: E1213 01:32:08.397671 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.397865 kubelet[2608]: E1213 01:32:08.397738 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74c8c6c788-jkfpz" Dec 13 01:32:08.397865 kubelet[2608]: E1213 01:32:08.397780 2608 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74c8c6c788-jkfpz" Dec 13 01:32:08.398067 kubelet[2608]: E1213 01:32:08.397858 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74c8c6c788-jkfpz_calico-apiserver(7d5cda69-3ee1-4238-8b54-30e176b7b3d7)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-74c8c6c788-jkfpz_calico-apiserver(7d5cda69-3ee1-4238-8b54-30e176b7b3d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74c8c6c788-jkfpz" podUID="7d5cda69-3ee1-4238-8b54-30e176b7b3d7" Dec 13 01:32:08.411445 containerd[1460]: time="2024-12-13T01:32:08.411374853Z" level=error msg="Failed to destroy network for sandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.411808 containerd[1460]: time="2024-12-13T01:32:08.411757094Z" level=error msg="encountered an error cleaning up failed sandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.411914 containerd[1460]: time="2024-12-13T01:32:08.411835321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c8c6c788-m2ckw,Uid:6a6f094a-3181-4104-9078-a9c6ee707b6a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.412335 kubelet[2608]: E1213 01:32:08.412291 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.412457 kubelet[2608]: E1213 01:32:08.412371 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74c8c6c788-m2ckw" Dec 13 01:32:08.412457 kubelet[2608]: E1213 01:32:08.412421 2608 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74c8c6c788-m2ckw" Dec 13 01:32:08.412868 kubelet[2608]: E1213 01:32:08.412819 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-74c8c6c788-m2ckw_calico-apiserver(6a6f094a-3181-4104-9078-a9c6ee707b6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74c8c6c788-m2ckw_calico-apiserver(6a6f094a-3181-4104-9078-a9c6ee707b6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74c8c6c788-m2ckw" podUID="6a6f094a-3181-4104-9078-a9c6ee707b6a" Dec 13 01:32:08.714790 kubelet[2608]: I1213 01:32:08.714709 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Dec 13 01:32:08.718013 containerd[1460]: time="2024-12-13T01:32:08.716211327Z" level=info msg="StopPodSandbox for \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\"" Dec 13 01:32:08.718013 containerd[1460]: time="2024-12-13T01:32:08.716474765Z" level=info msg="Ensure that sandbox c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e in task-service has been cleanup successfully" Dec 13 01:32:08.718615 kubelet[2608]: I1213 01:32:08.716406 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:08.729234 containerd[1460]: time="2024-12-13T01:32:08.729181893Z" level=info msg="StopPodSandbox for \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\"" Dec 13 01:32:08.729946 containerd[1460]: time="2024-12-13T01:32:08.729451087Z" level=info msg="Ensure that sandbox 826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352 in task-service has been cleanup successfully" Dec 13 01:32:08.735188 kubelet[2608]: I1213 01:32:08.735155 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:08.740334 containerd[1460]: time="2024-12-13T01:32:08.740279553Z" level=info msg="StopPodSandbox for \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\"" Dec 13 01:32:08.741406 containerd[1460]: time="2024-12-13T01:32:08.741371074Z" level=info msg="Ensure that sandbox 6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888 in task-service has been cleanup successfully" Dec 13 01:32:08.752354 kubelet[2608]: I1213 01:32:08.751106 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:08.753619 containerd[1460]: time="2024-12-13T01:32:08.753360487Z" level=info msg="StopPodSandbox for \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\"" Dec 13 01:32:08.756838 containerd[1460]: time="2024-12-13T01:32:08.755986617Z" level=info msg="Ensure that sandbox 2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7 in task-service has been cleanup successfully" Dec 13 01:32:08.853319 containerd[1460]: time="2024-12-13T01:32:08.853254006Z" level=error msg="StopPodSandbox for \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\" failed" error="failed to destroy network for sandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.853953 kubelet[2608]: E1213 01:32:08.853903 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Dec 13 01:32:08.854106 kubelet[2608]: E1213 01:32:08.854001 2608 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e"} Dec 13 01:32:08.854106 kubelet[2608]: E1213 01:32:08.854089 2608 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d5cda69-3ee1-4238-8b54-30e176b7b3d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:32:08.854293 kubelet[2608]: E1213 01:32:08.854158 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d5cda69-3ee1-4238-8b54-30e176b7b3d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74c8c6c788-jkfpz" podUID="7d5cda69-3ee1-4238-8b54-30e176b7b3d7" Dec 13 01:32:08.855650 containerd[1460]: time="2024-12-13T01:32:08.855597600Z" level=error msg="StopPodSandbox for \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\" failed" error="failed to destroy network for sandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.855904 kubelet[2608]: E1213 01:32:08.855876 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:08.856086 kubelet[2608]: E1213 01:32:08.856007 2608 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888"} Dec 13 01:32:08.856151 kubelet[2608]: E1213 01:32:08.856114 2608 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"d9ea8ed9-c62f-497b-8b5c-9f11233b2716\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:32:08.856800 kubelet[2608]: E1213 01:32:08.856628 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9ea8ed9-c62f-497b-8b5c-9f11233b2716\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xzvvx" podUID="d9ea8ed9-c62f-497b-8b5c-9f11233b2716" Dec 13 01:32:08.867527 containerd[1460]: time="2024-12-13T01:32:08.866905785Z" level=error msg="StopPodSandbox for \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\" failed" error="failed to destroy network for sandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.867689 kubelet[2608]: E1213 01:32:08.867210 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:08.867689 kubelet[2608]: E1213 01:32:08.867265 2608 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7"} Dec 13 01:32:08.867689 kubelet[2608]: E1213 01:32:08.867322 2608 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad72aa77-7913-4d0d-bc7f-8bd9b390797b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:32:08.867689 kubelet[2608]: E1213 01:32:08.867429 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad72aa77-7913-4d0d-bc7f-8bd9b390797b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wsrph" podUID="ad72aa77-7913-4d0d-bc7f-8bd9b390797b" Dec 13 01:32:08.870871 containerd[1460]: 
time="2024-12-13T01:32:08.870821646Z" level=error msg="StopPodSandbox for \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\" failed" error="failed to destroy network for sandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:32:08.871334 kubelet[2608]: E1213 01:32:08.871312 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:08.871631 kubelet[2608]: E1213 01:32:08.871481 2608 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352"} Dec 13 01:32:08.871631 kubelet[2608]: E1213 01:32:08.871547 2608 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a6f094a-3181-4104-9078-a9c6ee707b6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:32:08.871631 kubelet[2608]: E1213 01:32:08.871594 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a6f094a-3181-4104-9078-a9c6ee707b6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74c8c6c788-m2ckw" podUID="6a6f094a-3181-4104-9078-a9c6ee707b6a" Dec 13 01:32:09.179147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352-shm.mount: Deactivated successfully. Dec 13 01:32:09.179830 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e-shm.mount: Deactivated successfully. Dec 13 01:32:09.180085 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7-shm.mount: Deactivated successfully. Dec 13 01:32:09.180178 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888-shm.mount: Deactivated successfully. Dec 13 01:32:14.414872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3307198589.mount: Deactivated successfully. 
Dec 13 01:32:14.469773 containerd[1460]: time="2024-12-13T01:32:14.469679669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:14.471131 containerd[1460]: time="2024-12-13T01:32:14.471067735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:32:14.472689 containerd[1460]: time="2024-12-13T01:32:14.472622016Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:14.477053 containerd[1460]: time="2024-12-13T01:32:14.476972806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:14.478514 containerd[1460]: time="2024-12-13T01:32:14.477772552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.773486873s" Dec 13 01:32:14.478514 containerd[1460]: time="2024-12-13T01:32:14.477818876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:32:14.504329 containerd[1460]: time="2024-12-13T01:32:14.504274607Z" level=info msg="CreateContainer within sandbox \"cf31469f097863bafe5a5fc56138276b07a540999213d7e0f46abf5bd3b98c1f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:32:14.529491 containerd[1460]: time="2024-12-13T01:32:14.529423571Z" level=info msg="CreateContainer within sandbox \"cf31469f097863bafe5a5fc56138276b07a540999213d7e0f46abf5bd3b98c1f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5ae71bd103587ae127e6d8b3fb30bf76090b66ac2bc15daa0b3ac204317fae98\"" Dec 13 01:32:14.531370 containerd[1460]: time="2024-12-13T01:32:14.530311488Z" level=info msg="StartContainer for \"5ae71bd103587ae127e6d8b3fb30bf76090b66ac2bc15daa0b3ac204317fae98\"" Dec 13 01:32:14.572179 systemd[1]: Started cri-containerd-5ae71bd103587ae127e6d8b3fb30bf76090b66ac2bc15daa0b3ac204317fae98.scope - libcontainer container 5ae71bd103587ae127e6d8b3fb30bf76090b66ac2bc15daa0b3ac204317fae98. Dec 13 01:32:14.618994 containerd[1460]: time="2024-12-13T01:32:14.618914306Z" level=info msg="StartContainer for \"5ae71bd103587ae127e6d8b3fb30bf76090b66ac2bc15daa0b3ac204317fae98\" returns successfully" Dec 13 01:32:14.722216 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:32:14.722382 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Dec 13 01:32:14.808117 kubelet[2608]: I1213 01:32:14.808058 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-sswtz" podStartSLOduration=1.474067686 podStartE2EDuration="19.807999365s" podCreationTimestamp="2024-12-13 01:31:55 +0000 UTC" firstStartedPulling="2024-12-13 01:31:56.144278811 +0000 UTC m=+21.785125899" lastFinishedPulling="2024-12-13 01:32:14.478210488 +0000 UTC m=+40.119057578" observedRunningTime="2024-12-13 01:32:14.804837218 +0000 UTC m=+40.445684319" watchObservedRunningTime="2024-12-13 01:32:14.807999365 +0000 UTC m=+40.448846463" Dec 13 01:32:15.806769 systemd[1]: run-containerd-runc-k8s.io-5ae71bd103587ae127e6d8b3fb30bf76090b66ac2bc15daa0b3ac204317fae98-runc.xuGKSI.mount: Deactivated successfully. Dec 13 01:32:16.644061 kernel: bpftool[3874]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:32:16.979827 systemd-networkd[1372]: vxlan.calico: Link UP Dec 13 01:32:16.979843 systemd-networkd[1372]: vxlan.calico: Gained carrier Dec 13 01:32:18.292248 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL Dec 13 01:32:20.573171 containerd[1460]: time="2024-12-13T01:32:20.571990309Z" level=info msg="StopPodSandbox for \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\"" Dec 13 01:32:20.573171 containerd[1460]: time="2024-12-13T01:32:20.572710509Z" level=info msg="StopPodSandbox for \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\"" Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.670 [INFO][3984] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.670 [INFO][3984] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" iface="eth0" netns="/var/run/netns/cni-eb5068c0-ce53-c916-589f-f661f9925c1d" Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.671 [INFO][3984] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" iface="eth0" netns="/var/run/netns/cni-eb5068c0-ce53-c916-589f-f661f9925c1d" Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.671 [INFO][3984] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" iface="eth0" netns="/var/run/netns/cni-eb5068c0-ce53-c916-589f-f661f9925c1d" Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.672 [INFO][3984] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.672 [INFO][3984] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.727 [INFO][4000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" HandleID="k8s-pod-network.eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.727 [INFO][4000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.727 [INFO][4000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.736 [WARNING][4000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" HandleID="k8s-pod-network.eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.736 [INFO][4000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" HandleID="k8s-pod-network.eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.737 [INFO][4000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:20.743306 containerd[1460]: 2024-12-13 01:32:20.741 [INFO][3984] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:20.744526 containerd[1460]: time="2024-12-13T01:32:20.743515699Z" level=info msg="TearDown network for sandbox \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\" successfully" Dec 13 01:32:20.744526 containerd[1460]: time="2024-12-13T01:32:20.743551197Z" level=info msg="StopPodSandbox for \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\" returns successfully" Dec 13 01:32:20.747695 containerd[1460]: time="2024-12-13T01:32:20.747267041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jkhgp,Uid:8f72c213-293a-4c61-89bb-f506676840e6,Namespace:calico-system,Attempt:1,}" Dec 13 01:32:20.749875 systemd[1]: run-netns-cni\x2deb5068c0\x2dce53\x2dc916\x2d589f\x2df661f9925c1d.mount: Deactivated successfully. 
Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.658 [INFO][3983] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.660 [INFO][3983] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" iface="eth0" netns="/var/run/netns/cni-edee7c92-50f5-510f-acde-30085a0fc502" Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.660 [INFO][3983] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" iface="eth0" netns="/var/run/netns/cni-edee7c92-50f5-510f-acde-30085a0fc502" Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.662 [INFO][3983] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" iface="eth0" netns="/var/run/netns/cni-edee7c92-50f5-510f-acde-30085a0fc502" Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.662 [INFO][3983] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.662 [INFO][3983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.729 [INFO][3996] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" HandleID="k8s-pod-network.2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.729 [INFO][3996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.737 [INFO][3996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.757 [WARNING][3996] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" HandleID="k8s-pod-network.2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.757 [INFO][3996] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" HandleID="k8s-pod-network.2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.759 [INFO][3996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:20.766255 containerd[1460]: 2024-12-13 01:32:20.761 [INFO][3983] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:20.766255 containerd[1460]: time="2024-12-13T01:32:20.764210308Z" level=info msg="TearDown network for sandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\" successfully" Dec 13 01:32:20.766255 containerd[1460]: time="2024-12-13T01:32:20.764242041Z" level=info msg="StopPodSandbox for \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\" returns successfully" Dec 13 01:32:20.767507 containerd[1460]: time="2024-12-13T01:32:20.767461248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wsrph,Uid:ad72aa77-7913-4d0d-bc7f-8bd9b390797b,Namespace:kube-system,Attempt:1,}" Dec 13 01:32:20.770623 systemd[1]: run-netns-cni\x2dedee7c92\x2d50f5\x2d510f\x2dacde\x2d30085a0fc502.mount: Deactivated successfully. Dec 13 01:32:20.975142 systemd-networkd[1372]: cali85c9a628f0f: Link UP Dec 13 01:32:20.979795 systemd-networkd[1372]: cali85c9a628f0f: Gained carrier Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.867 [INFO][4020] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0 coredns-76f75df574- kube-system ad72aa77-7913-4d0d-bc7f-8bd9b390797b 759 0 2024-12-13 01:31:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal coredns-76f75df574-wsrph eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali85c9a628f0f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" Namespace="kube-system" Pod="coredns-76f75df574-wsrph" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-" Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.867 [INFO][4020] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" Namespace="kube-system" Pod="coredns-76f75df574-wsrph" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.918 [INFO][4033] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" HandleID="k8s-pod-network.d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.936 [INFO][4033] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" HandleID="k8s-pod-network.d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003198e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", "pod":"coredns-76f75df574-wsrph", "timestamp":"2024-12-13 
01:32:20.918453835 +0000 UTC"}, Hostname:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.936 [INFO][4033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.936 [INFO][4033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.936 [INFO][4033] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.938 [INFO][4033] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.945 [INFO][4033] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.949 [INFO][4033] ipam/ipam.go 489: Trying affinity for 192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.951 [INFO][4033] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.953 [INFO][4033] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.953 [INFO][4033] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.955 [INFO][4033] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.959 [INFO][4033] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.965 [INFO][4033] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.129/26] block=192.168.93.128/26 handle="k8s-pod-network.d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.965 [INFO][4033] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.129/26] handle="k8s-pod-network.d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.965 [INFO][4033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
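The IPAM trace above is Calico's block-affinity path: the host already holds an affinity for 192.168.93.128/26, the block loads, and the first free address in it is claimed (.129, since .128 is the block's own network address). A minimal sketch of that last step using Go's net/netip; real Calico IPAM additionally creates the handle and writes the block back to the datastore under the host-wide lock seen in the trace:

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstFree claims the first unused address in an affine block,
// skipping the block's own network address (.128 here).
func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.93.128/26")
	ip, ok := firstFree(block, map[netip.Addr]bool{})
	fmt.Println(ip, ok) // 192.168.93.129 true
}
```

Running it prints 192.168.93.129 true, matching the address assigned to coredns-76f75df574-wsrph in the entries that follow.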
Dec 13 01:32:21.018110 containerd[1460]: 2024-12-13 01:32:20.965 [INFO][4033] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.129/26] IPv6=[] ContainerID="d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" HandleID="k8s-pod-network.d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:21.020321 containerd[1460]: 2024-12-13 01:32:20.968 [INFO][4020] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" Namespace="kube-system" Pod="coredns-76f75df574-wsrph" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ad72aa77-7913-4d0d-bc7f-8bd9b390797b", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-wsrph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85c9a628f0f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:21.020321 containerd[1460]: 2024-12-13 01:32:20.969 [INFO][4020] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.129/32] ContainerID="d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" Namespace="kube-system" Pod="coredns-76f75df574-wsrph" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:21.020321 containerd[1460]: 2024-12-13 01:32:20.970 [INFO][4020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85c9a628f0f ContainerID="d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" Namespace="kube-system" Pod="coredns-76f75df574-wsrph" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:21.020321 containerd[1460]: 2024-12-13 01:32:20.975 [INFO][4020] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" Namespace="kube-system" Pod="coredns-76f75df574-wsrph" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:21.020321 containerd[1460]: 2024-12-13 01:32:20.976 [INFO][4020] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" Namespace="kube-system" Pod="coredns-76f75df574-wsrph" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ad72aa77-7913-4d0d-bc7f-8bd9b390797b", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d", Pod:"coredns-76f75df574-wsrph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85c9a628f0f", MAC:"82:fe:e1:4b:9a:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:21.020321 containerd[1460]: 2024-12-13 01:32:21.003 [INFO][4020] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d" Namespace="kube-system" Pod="coredns-76f75df574-wsrph" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:21.060413 systemd-networkd[1372]: cali5c82ea3ffbf: Link UP Dec 13 01:32:21.062300 systemd-networkd[1372]: cali5c82ea3ffbf: Gained carrier Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:20.870 [INFO][4010] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0 csi-node-driver- calico-system 8f72c213-293a-4c61-89bb-f506676840e6 760 0 2024-12-13 
01:31:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal csi-node-driver-jkhgp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5c82ea3ffbf [] []}} ContainerID="09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" Namespace="calico-system" Pod="csi-node-driver-jkhgp" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-" Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:20.870 [INFO][4010] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" Namespace="calico-system" Pod="csi-node-driver-jkhgp" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:20.920 [INFO][4037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" HandleID="k8s-pod-network.09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:20.936 [INFO][4037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" HandleID="k8s-pod-network.09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", "pod":"csi-node-driver-jkhgp", "timestamp":"2024-12-13 01:32:20.920699228 +0000 UTC"}, Hostname:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:20.936 [INFO][4037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:20.965 [INFO][4037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:20.965 [INFO][4037] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:20.968 [INFO][4037] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:20.991 [INFO][4037] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:21.001 [INFO][4037] ipam/ipam.go 489: Trying affinity for 192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:21.012 [INFO][4037] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:21.017 [INFO][4037] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:21.017 [INFO][4037] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:21.022 [INFO][4037] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:21.031 [INFO][4037] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:21.044 [INFO][4037] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.130/26] block=192.168.93.128/26 handle="k8s-pod-network.09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:21.044 [INFO][4037] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.130/26] handle="k8s-pod-network.09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:21.044 [INFO][4037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:32:21.091204 containerd[1460]: 2024-12-13 01:32:21.044 [INFO][4037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.130/26] IPv6=[] ContainerID="09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" HandleID="k8s-pod-network.09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:21.094792 containerd[1460]: 2024-12-13 01:32:21.049 [INFO][4010] cni-plugin/k8s.go 386: Populated endpoint ContainerID="09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" Namespace="calico-system" Pod="csi-node-driver-jkhgp" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f72c213-293a-4c61-89bb-f506676840e6", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-jkhgp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c82ea3ffbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:21.094792 containerd[1460]: 2024-12-13 01:32:21.050 [INFO][4010] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.130/32] ContainerID="09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" Namespace="calico-system" Pod="csi-node-driver-jkhgp" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:21.094792 containerd[1460]: 2024-12-13 01:32:21.050 [INFO][4010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c82ea3ffbf ContainerID="09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" Namespace="calico-system" Pod="csi-node-driver-jkhgp" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:21.094792 containerd[1460]: 2024-12-13 01:32:21.062 [INFO][4010] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" Namespace="calico-system" Pod="csi-node-driver-jkhgp" 
WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:21.094792 containerd[1460]: 2024-12-13 01:32:21.064 [INFO][4010] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" Namespace="calico-system" Pod="csi-node-driver-jkhgp" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f72c213-293a-4c61-89bb-f506676840e6", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f", Pod:"csi-node-driver-jkhgp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c82ea3ffbf", MAC:"3a:06:d7:72:02:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:21.094792 containerd[1460]: 2024-12-13 01:32:21.087 [INFO][4010] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f" Namespace="calico-system" Pod="csi-node-driver-jkhgp" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:21.100728 containerd[1460]: time="2024-12-13T01:32:21.100269608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:21.100728 containerd[1460]: time="2024-12-13T01:32:21.100384879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:21.100728 containerd[1460]: time="2024-12-13T01:32:21.100407976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:21.100728 containerd[1460]: time="2024-12-13T01:32:21.100543012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:21.141550 systemd[1]: Started cri-containerd-d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d.scope - libcontainer container d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d. Dec 13 01:32:21.157472 containerd[1460]: time="2024-12-13T01:32:21.157044848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:21.157472 containerd[1460]: time="2024-12-13T01:32:21.157137166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:21.157472 containerd[1460]: time="2024-12-13T01:32:21.157159215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:21.157472 containerd[1460]: time="2024-12-13T01:32:21.157284671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:21.193168 systemd[1]: Started cri-containerd-09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f.scope - libcontainer container 09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f. Dec 13 01:32:21.225511 containerd[1460]: time="2024-12-13T01:32:21.225212845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wsrph,Uid:ad72aa77-7913-4d0d-bc7f-8bd9b390797b,Namespace:kube-system,Attempt:1,} returns sandbox id \"d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d\"" Dec 13 01:32:21.243336 containerd[1460]: time="2024-12-13T01:32:21.243124137Z" level=info msg="CreateContainer within sandbox \"d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:32:21.250008 containerd[1460]: time="2024-12-13T01:32:21.249952137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jkhgp,Uid:8f72c213-293a-4c61-89bb-f506676840e6,Namespace:calico-system,Attempt:1,} returns sandbox id \"09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f\"" Dec 13 01:32:21.252825 containerd[1460]: time="2024-12-13T01:32:21.252665964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:32:21.265942 containerd[1460]: time="2024-12-13T01:32:21.265858693Z" level=info msg="CreateContainer within sandbox \"d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5eb90a81add8164e533d132a41572447ba5e9f882deec44e10b52d0a497d27e\"" Dec 13 01:32:21.266641 containerd[1460]: time="2024-12-13T01:32:21.266604908Z" level=info msg="StartContainer for \"e5eb90a81add8164e533d132a41572447ba5e9f882deec44e10b52d0a497d27e\"" Dec 13 01:32:21.303230 systemd[1]: Started cri-containerd-e5eb90a81add8164e533d132a41572447ba5e9f882deec44e10b52d0a497d27e.scope - libcontainer container e5eb90a81add8164e533d132a41572447ba5e9f882deec44e10b52d0a497d27e. 
Dec 13 01:32:21.344532 containerd[1460]: time="2024-12-13T01:32:21.344348056Z" level=info msg="StartContainer for \"e5eb90a81add8164e533d132a41572447ba5e9f882deec44e10b52d0a497d27e\" returns successfully" Dec 13 01:32:21.571555 containerd[1460]: time="2024-12-13T01:32:21.571307987Z" level=info msg="StopPodSandbox for \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\"" Dec 13 01:32:21.572571 containerd[1460]: time="2024-12-13T01:32:21.572080506Z" level=info msg="StopPodSandbox for \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\"" Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.665 [INFO][4217] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.668 [INFO][4217] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" iface="eth0" netns="/var/run/netns/cni-889a8248-18af-75d3-7cf0-c37632894db5" Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.671 [INFO][4217] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" iface="eth0" netns="/var/run/netns/cni-889a8248-18af-75d3-7cf0-c37632894db5" Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.672 [INFO][4217] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" iface="eth0" netns="/var/run/netns/cni-889a8248-18af-75d3-7cf0-c37632894db5" Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.673 [INFO][4217] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.673 [INFO][4217] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.717 [INFO][4235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" HandleID="k8s-pod-network.0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.718 [INFO][4235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.718 [INFO][4235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.728 [WARNING][4235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" HandleID="k8s-pod-network.0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.728 [INFO][4235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" HandleID="k8s-pod-network.0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.731 [INFO][4235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:21.735561 containerd[1460]: 2024-12-13 01:32:21.733 [INFO][4217] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:21.738022 containerd[1460]: time="2024-12-13T01:32:21.737091430Z" level=info msg="TearDown network for sandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\" successfully" Dec 13 01:32:21.738022 containerd[1460]: time="2024-12-13T01:32:21.737162027Z" level=info msg="StopPodSandbox for \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\" returns successfully" Dec 13 01:32:21.738977 containerd[1460]: time="2024-12-13T01:32:21.738407262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d99b9d6cd-2nmkc,Uid:7d47f189-a8fb-4943-9daa-99592014efac,Namespace:calico-system,Attempt:1,}" Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.666 [INFO][4224] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.666 [INFO][4224] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" iface="eth0" netns="/var/run/netns/cni-eb105ea5-a6a5-eb99-151c-c92093385255" Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.670 [INFO][4224] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" iface="eth0" netns="/var/run/netns/cni-eb105ea5-a6a5-eb99-151c-c92093385255" Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.671 [INFO][4224] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" iface="eth0" netns="/var/run/netns/cni-eb105ea5-a6a5-eb99-151c-c92093385255" Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.671 [INFO][4224] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.671 [INFO][4224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.721 [INFO][4234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" HandleID="k8s-pod-network.826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.722 [INFO][4234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.731 [INFO][4234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.743 [WARNING][4234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" HandleID="k8s-pod-network.826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.743 [INFO][4234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" HandleID="k8s-pod-network.826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.748 [INFO][4234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:21.759499 containerd[1460]: 2024-12-13 01:32:21.754 [INFO][4224] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:21.763512 containerd[1460]: time="2024-12-13T01:32:21.759656289Z" level=info msg="TearDown network for sandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\" successfully" Dec 13 01:32:21.763512 containerd[1460]: time="2024-12-13T01:32:21.759692439Z" level=info msg="StopPodSandbox for \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\" returns successfully" Dec 13 01:32:21.761645 systemd[1]: run-netns-cni\x2d889a8248\x2d18af\x2d75d3\x2d7cf0\x2dc37632894db5.mount: Deactivated successfully. Dec 13 01:32:21.770793 containerd[1460]: time="2024-12-13T01:32:21.770738060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c8c6c788-m2ckw,Uid:6a6f094a-3181-4104-9078-a9c6ee707b6a,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:32:21.773449 systemd[1]: run-netns-cni\x2deb105ea5\x2da6a5\x2deb99\x2d151c\x2dc92093385255.mount: Deactivated successfully. 
Dec 13 01:32:21.837462 kubelet[2608]: I1213 01:32:21.836836 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wsrph" podStartSLOduration=34.836776678 podStartE2EDuration="34.836776678s" podCreationTimestamp="2024-12-13 01:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:21.836303603 +0000 UTC m=+47.477150701" watchObservedRunningTime="2024-12-13 01:32:21.836776678 +0000 UTC m=+47.477623777" Dec 13 01:32:22.005410 systemd-networkd[1372]: cali85c9a628f0f: Gained IPv6LL Dec 13 01:32:22.063023 systemd-networkd[1372]: calib9e145472ec: Link UP Dec 13 01:32:22.065427 systemd-networkd[1372]: calib9e145472ec: Gained carrier Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:21.915 [INFO][4247] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0 calico-kube-controllers-d99b9d6cd- calico-system 7d47f189-a8fb-4943-9daa-99592014efac 775 0 2024-12-13 01:31:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d99b9d6cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal calico-kube-controllers-d99b9d6cd-2nmkc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib9e145472ec [] []}} ContainerID="e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" Namespace="calico-system" Pod="calico-kube-controllers-d99b9d6cd-2nmkc" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-" Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:21.915 [INFO][4247] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" Namespace="calico-system" Pod="calico-kube-controllers-d99b9d6cd-2nmkc" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:21.990 [INFO][4273] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" HandleID="k8s-pod-network.e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.007 [INFO][4273] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" HandleID="k8s-pod-network.e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000112190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", "pod":"calico-kube-controllers-d99b9d6cd-2nmkc", "timestamp":"2024-12-13 
01:32:21.990178497 +0000 UTC"}, Hostname:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.007 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.007 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.007 [INFO][4273] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.011 [INFO][4273] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.023 [INFO][4273] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.029 [INFO][4273] ipam/ipam.go 489: Trying affinity for 192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.032 [INFO][4273] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.036 [INFO][4273] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.036 [INFO][4273] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.039 [INFO][4273] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68 Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.044 [INFO][4273] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.053 [INFO][4273] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.131/26] block=192.168.93.128/26 handle="k8s-pod-network.e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.053 [INFO][4273] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.131/26] handle="k8s-pod-network.e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.053 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:32:22.099773 containerd[1460]: 2024-12-13 01:32:22.053 [INFO][4273] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.131/26] IPv6=[] ContainerID="e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" HandleID="k8s-pod-network.e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:22.102464 containerd[1460]: 2024-12-13 01:32:22.056 [INFO][4247] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" Namespace="calico-system" Pod="calico-kube-controllers-d99b9d6cd-2nmkc" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0", GenerateName:"calico-kube-controllers-d99b9d6cd-", Namespace:"calico-system", SelfLink:"", UID:"7d47f189-a8fb-4943-9daa-99592014efac", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d99b9d6cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-d99b9d6cd-2nmkc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9e145472ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:22.102464 containerd[1460]: 2024-12-13 01:32:22.057 [INFO][4247] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.131/32] ContainerID="e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" Namespace="calico-system" Pod="calico-kube-controllers-d99b9d6cd-2nmkc" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:22.102464 containerd[1460]: 2024-12-13 01:32:22.057 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9e145472ec ContainerID="e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" Namespace="calico-system" Pod="calico-kube-controllers-d99b9d6cd-2nmkc" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:22.102464 containerd[1460]: 2024-12-13 01:32:22.063 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" Namespace="calico-system" Pod="calico-kube-controllers-d99b9d6cd-2nmkc" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:22.102464 containerd[1460]: 2024-12-13 01:32:22.064 [INFO][4247] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" Namespace="calico-system" Pod="calico-kube-controllers-d99b9d6cd-2nmkc" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0", GenerateName:"calico-kube-controllers-d99b9d6cd-", Namespace:"calico-system", SelfLink:"", UID:"7d47f189-a8fb-4943-9daa-99592014efac", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d99b9d6cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68", Pod:"calico-kube-controllers-d99b9d6cd-2nmkc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9e145472ec", MAC:"9e:d3:2e:19:8e:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:22.102464 containerd[1460]: 2024-12-13 01:32:22.094 [INFO][4247] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68" Namespace="calico-system" Pod="calico-kube-controllers-d99b9d6cd-2nmkc" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:22.147500 systemd-networkd[1372]: calia2fe581d983: Link UP Dec 13 01:32:22.149517 systemd-networkd[1372]: calia2fe581d983: Gained carrier Dec 13 01:32:22.197751 systemd-networkd[1372]: cali5c82ea3ffbf: Gained IPv6LL Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:21.942 [INFO][4260] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0 calico-apiserver-74c8c6c788- calico-apiserver 6a6f094a-3181-4104-9078-a9c6ee707b6a 776 0 2024-12-13 01:31:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:74c8c6c788 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal calico-apiserver-74c8c6c788-m2ckw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia2fe581d983 [] []}} ContainerID="757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-m2ckw" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-" Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:21.943 [INFO][4260] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-m2ckw" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.007 [INFO][4280] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" HandleID="k8s-pod-network.757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.027 [INFO][4280] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" HandleID="k8s-pod-network.757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003196e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", "pod":"calico-apiserver-74c8c6c788-m2ckw", "timestamp":"2024-12-13 01:32:22.007237805 +0000 UTC"}, Hostname:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.027 [INFO][4280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.053 [INFO][4280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.054 [INFO][4280] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.057 [INFO][4280] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.069 [INFO][4280] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.080 [INFO][4280] ipam/ipam.go 489: Trying affinity for 192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.088 [INFO][4280] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.092 [INFO][4280] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.092 [INFO][4280] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.097 [INFO][4280] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52 Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.114 [INFO][4280] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.133 [INFO][4280] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.132/26] block=192.168.93.128/26 handle="k8s-pod-network.757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.133 [INFO][4280] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.132/26] handle="k8s-pod-network.757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.133 [INFO][4280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:32:22.199187 containerd[1460]: 2024-12-13 01:32:22.133 [INFO][4280] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.132/26] IPv6=[] ContainerID="757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" HandleID="k8s-pod-network.757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:22.200336 containerd[1460]: 2024-12-13 01:32:22.138 [INFO][4260] cni-plugin/k8s.go 386: Populated endpoint ContainerID="757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-m2ckw" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0", GenerateName:"calico-apiserver-74c8c6c788-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a6f094a-3181-4104-9078-a9c6ee707b6a", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c8c6c788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-74c8c6c788-m2ckw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2fe581d983", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:22.200336 containerd[1460]: 2024-12-13 01:32:22.138 [INFO][4260] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.132/32] ContainerID="757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-m2ckw" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:22.200336 containerd[1460]: 2024-12-13 01:32:22.139 [INFO][4260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2fe581d983 ContainerID="757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-m2ckw" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:22.200336 containerd[1460]: 2024-12-13 01:32:22.152 [INFO][4260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" Namespace="calico-apiserver" 
Pod="calico-apiserver-74c8c6c788-m2ckw" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:22.200336 containerd[1460]: 2024-12-13 01:32:22.158 [INFO][4260] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-m2ckw" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0", GenerateName:"calico-apiserver-74c8c6c788-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a6f094a-3181-4104-9078-a9c6ee707b6a", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c8c6c788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52", Pod:"calico-apiserver-74c8c6c788-m2ckw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2fe581d983", MAC:"a2:46:de:a7:4f:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:22.200336 containerd[1460]: 2024-12-13 01:32:22.190 [INFO][4260] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-m2ckw" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:22.213966 containerd[1460]: time="2024-12-13T01:32:22.203386016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:22.213966 containerd[1460]: time="2024-12-13T01:32:22.203497713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:22.213966 containerd[1460]: time="2024-12-13T01:32:22.203591590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:22.213966 containerd[1460]: time="2024-12-13T01:32:22.203992002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:22.247268 systemd[1]: Started cri-containerd-e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68.scope - libcontainer container e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68.
Dec 13 01:32:22.294824 containerd[1460]: time="2024-12-13T01:32:22.294630733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:22.295104 containerd[1460]: time="2024-12-13T01:32:22.295062849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:22.296075 containerd[1460]: time="2024-12-13T01:32:22.296018604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:22.296428 containerd[1460]: time="2024-12-13T01:32:22.296383598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:22.346406 systemd[1]: Started cri-containerd-757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52.scope - libcontainer container 757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52.
Dec 13 01:32:22.478889 containerd[1460]: time="2024-12-13T01:32:22.478837906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d99b9d6cd-2nmkc,Uid:7d47f189-a8fb-4943-9daa-99592014efac,Namespace:calico-system,Attempt:1,} returns sandbox id \"e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68\""
Dec 13 01:32:22.542137 containerd[1460]: time="2024-12-13T01:32:22.542090978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c8c6c788-m2ckw,Uid:6a6f094a-3181-4104-9078-a9c6ee707b6a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52\""
Dec 13 01:32:22.572441 containerd[1460]: time="2024-12-13T01:32:22.572335824Z" level=info msg="StopPodSandbox for \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\""
Dec 13 01:32:22.646168 containerd[1460]: time="2024-12-13T01:32:22.646107997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:22.647990 containerd[1460]: time="2024-12-13T01:32:22.647791877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Dec 13 01:32:22.649428 containerd[1460]: time="2024-12-13T01:32:22.649387037Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:22.655575 containerd[1460]: time="2024-12-13T01:32:22.653996661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:22.658293 containerd[1460]: time="2024-12-13T01:32:22.658231406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.405477578s"
Dec 13 01:32:22.658397 containerd[1460]: time="2024-12-13T01:32:22.658310621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 01:32:22.660184 containerd[1460]: time="2024-12-13T01:32:22.660149181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Dec 13 01:32:22.663583 containerd[1460]: time="2024-12-13T01:32:22.663513497Z" level=info msg="CreateContainer within sandbox \"09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 01:32:22.694433 containerd[1460]: time="2024-12-13T01:32:22.694379745Z" level=info msg="CreateContainer within sandbox \"09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5bd21c4954c4381927cac65cac637a6965de4ae0be95da772342a6ac8122cd76\""
Dec 13 01:32:22.695485 containerd[1460]: time="2024-12-13T01:32:22.695452145Z" level=info msg="StartContainer for \"5bd21c4954c4381927cac65cac637a6965de4ae0be95da772342a6ac8122cd76\""
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.655 [INFO][4419] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e"
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.656 [INFO][4419] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" iface="eth0" netns="/var/run/netns/cni-f23b1fbc-59e0-8991-5161-7d6271fc7436"
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.657 [INFO][4419] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" iface="eth0" netns="/var/run/netns/cni-f23b1fbc-59e0-8991-5161-7d6271fc7436"
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.657 [INFO][4419] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" iface="eth0" netns="/var/run/netns/cni-f23b1fbc-59e0-8991-5161-7d6271fc7436"
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.657 [INFO][4419] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e"
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.657 [INFO][4419] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e"
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.690 [INFO][4426] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" HandleID="k8s-pod-network.c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0"
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.691 [INFO][4426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.691 [INFO][4426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.703 [WARNING][4426] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" HandleID="k8s-pod-network.c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0"
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.704 [INFO][4426] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" HandleID="k8s-pod-network.c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0"
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.710 [INFO][4426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:32:22.715102 containerd[1460]: 2024-12-13 01:32:22.712 [INFO][4419] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e"
Dec 13 01:32:22.717829 containerd[1460]: time="2024-12-13T01:32:22.715299118Z" level=info msg="TearDown network for sandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\" successfully"
Dec 13 01:32:22.717829 containerd[1460]: time="2024-12-13T01:32:22.715340015Z" level=info msg="StopPodSandbox for \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\" returns successfully"
Dec 13 01:32:22.718998 containerd[1460]: time="2024-12-13T01:32:22.718456909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c8c6c788-jkfpz,Uid:7d5cda69-3ee1-4238-8b54-30e176b7b3d7,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 01:32:22.747278 systemd[1]: Started cri-containerd-5bd21c4954c4381927cac65cac637a6965de4ae0be95da772342a6ac8122cd76.scope - libcontainer container 5bd21c4954c4381927cac65cac637a6965de4ae0be95da772342a6ac8122cd76.
Dec 13 01:32:22.767813 systemd[1]: run-netns-cni\x2df23b1fbc\x2d59e0\x2d8991\x2d5161\x2d7d6271fc7436.mount: Deactivated successfully.
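The WARNING at 01:32:22.703 is benign: the handle's address was already gone, and the release path is evidently idempotent under the host-wide IPAM lock seen in these entries, so teardown just logs and continues. A minimal Go sketch of that pattern (illustrative only, not Calico's actual allocator; the handle name is made up):

package main

import (
	"fmt"
	"sync"
)

// allocator models an IP-by-handle table guarded by one host-wide lock,
// the same shape as the "host-wide IPAM lock" messages in the log.
type allocator struct {
	mu       sync.Mutex
	byHandle map[string]string
}

// Release is idempotent: an unknown handle warns and returns, mirroring
// "Asked to release address but it doesn't exist. Ignoring".
func (a *allocator) Release(handleID string) {
	a.mu.Lock()
	defer a.mu.Unlock()
	if _, ok := a.byHandle[handleID]; !ok {
		fmt.Printf("WARNING: no address recorded for handle %q, ignoring\n", handleID)
		return
	}
	delete(a.byHandle, handleID)
}

func main() {
	a := &allocator{byHandle: map[string]string{"k8s-pod-network.demo": "192.168.93.132"}}
	a.Release("k8s-pod-network.demo") // frees the address
	a.Release("k8s-pod-network.demo") // duplicate release is a logged no-op
}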
Dec 13 01:32:22.838116 containerd[1460]: time="2024-12-13T01:32:22.837572372Z" level=info msg="StartContainer for \"5bd21c4954c4381927cac65cac637a6965de4ae0be95da772342a6ac8122cd76\" returns successfully" Dec 13 01:32:22.945739 systemd-networkd[1372]: calieab9ade064e: Link UP Dec 13 01:32:22.948472 systemd-networkd[1372]: calieab9ade064e: Gained carrier Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.834 [INFO][4452] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0 calico-apiserver-74c8c6c788- calico-apiserver 7d5cda69-3ee1-4238-8b54-30e176b7b3d7 794 0 2024-12-13 01:31:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74c8c6c788 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal calico-apiserver-74c8c6c788-jkfpz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieab9ade064e [] []}} ContainerID="62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-jkfpz" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-" Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.834 [INFO][4452] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-jkfpz" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.883 [INFO][4481] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" HandleID="k8s-pod-network.62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.895 [INFO][4481] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" HandleID="k8s-pod-network.62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003054c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", "pod":"calico-apiserver-74c8c6c788-jkfpz", "timestamp":"2024-12-13 01:32:22.883698344 +0000 UTC"}, Hostname:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.895 [INFO][4481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.895 [INFO][4481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.895 [INFO][4481] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.897 [INFO][4481] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.902 [INFO][4481] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.907 [INFO][4481] ipam/ipam.go 489: Trying affinity for 192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.909 [INFO][4481] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.914 [INFO][4481] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.914 [INFO][4481] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.917 [INFO][4481] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518 Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.925 [INFO][4481] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.936 [INFO][4481] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.133/26] block=192.168.93.128/26 handle="k8s-pod-network.62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.936 [INFO][4481] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.133/26] handle="k8s-pod-network.62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.936 [INFO][4481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:32:22.971613 containerd[1460]: 2024-12-13 01:32:22.936 [INFO][4481] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.133/26] IPv6=[] ContainerID="62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" HandleID="k8s-pod-network.62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:22.975244 containerd[1460]: 2024-12-13 01:32:22.940 [INFO][4452] cni-plugin/k8s.go 386: Populated endpoint ContainerID="62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-jkfpz" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0", GenerateName:"calico-apiserver-74c8c6c788-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d5cda69-3ee1-4238-8b54-30e176b7b3d7", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c8c6c788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-74c8c6c788-jkfpz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieab9ade064e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:22.975244 containerd[1460]: 2024-12-13 01:32:22.940 [INFO][4452] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.133/32] ContainerID="62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-jkfpz" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:22.975244 containerd[1460]: 2024-12-13 01:32:22.940 [INFO][4452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieab9ade064e ContainerID="62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-jkfpz" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:22.975244 containerd[1460]: 2024-12-13 01:32:22.947 [INFO][4452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" Namespace="calico-apiserver" 
Pod="calico-apiserver-74c8c6c788-jkfpz" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:22.975244 containerd[1460]: 2024-12-13 01:32:22.950 [INFO][4452] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-jkfpz" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0", GenerateName:"calico-apiserver-74c8c6c788-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d5cda69-3ee1-4238-8b54-30e176b7b3d7", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c8c6c788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518", Pod:"calico-apiserver-74c8c6c788-jkfpz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieab9ade064e", MAC:"4a:1b:70:2f:13:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:22.975244 containerd[1460]: 2024-12-13 01:32:22.968 [INFO][4452] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518" Namespace="calico-apiserver" Pod="calico-apiserver-74c8c6c788-jkfpz" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:23.026437 containerd[1460]: time="2024-12-13T01:32:23.026172706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:23.026727 containerd[1460]: time="2024-12-13T01:32:23.026340055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:23.026727 containerd[1460]: time="2024-12-13T01:32:23.026365015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:23.026727 containerd[1460]: time="2024-12-13T01:32:23.026486332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:23.065166 systemd[1]: Started cri-containerd-62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518.scope - libcontainer container 62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518. Dec 13 01:32:23.125568 containerd[1460]: time="2024-12-13T01:32:23.125482887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c8c6c788-jkfpz,Uid:7d5cda69-3ee1-4238-8b54-30e176b7b3d7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518\"" Dec 13 01:32:23.571966 containerd[1460]: time="2024-12-13T01:32:23.571660648Z" level=info msg="StopPodSandbox for \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\"" Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.627 [INFO][4555] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.628 [INFO][4555] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" iface="eth0" netns="/var/run/netns/cni-206e21a4-b5ca-6ee0-56c0-73cda6c146ed" Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.630 [INFO][4555] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" iface="eth0" netns="/var/run/netns/cni-206e21a4-b5ca-6ee0-56c0-73cda6c146ed" Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.631 [INFO][4555] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" iface="eth0" netns="/var/run/netns/cni-206e21a4-b5ca-6ee0-56c0-73cda6c146ed" Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.631 [INFO][4555] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.631 [INFO][4555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.660 [INFO][4562] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" HandleID="k8s-pod-network.6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.660 [INFO][4562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.660 [INFO][4562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.687 [WARNING][4562] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" HandleID="k8s-pod-network.6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.688 [INFO][4562] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" HandleID="k8s-pod-network.6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.698 [INFO][4562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:23.708353 containerd[1460]: 2024-12-13 01:32:23.704 [INFO][4555] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:23.712700 containerd[1460]: time="2024-12-13T01:32:23.709061774Z" level=info msg="TearDown network for sandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\" successfully" Dec 13 01:32:23.712700 containerd[1460]: time="2024-12-13T01:32:23.709100780Z" level=info msg="StopPodSandbox for \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\" returns successfully" Dec 13 01:32:23.713305 containerd[1460]: time="2024-12-13T01:32:23.713222668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xzvvx,Uid:d9ea8ed9-c62f-497b-8b5c-9f11233b2716,Namespace:kube-system,Attempt:1,}" Dec 13 01:32:23.716797 systemd[1]: run-netns-cni\x2d206e21a4\x2db5ca\x2d6ee0\x2d56c0\x2d73cda6c146ed.mount: Deactivated successfully. 
Dec 13 01:32:23.732626 systemd-networkd[1372]: calib9e145472ec: Gained IPv6LL Dec 13 01:32:23.924324 systemd-networkd[1372]: calia2fe581d983: Gained IPv6LL Dec 13 01:32:24.061763 systemd-networkd[1372]: cali464edb5108a: Link UP Dec 13 01:32:24.063768 systemd-networkd[1372]: cali464edb5108a: Gained carrier Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:23.859 [INFO][4570] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0 coredns-76f75df574- kube-system d9ea8ed9-c62f-497b-8b5c-9f11233b2716 806 0 2024-12-13 01:31:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal coredns-76f75df574-xzvvx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali464edb5108a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" Namespace="kube-system" Pod="coredns-76f75df574-xzvvx" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-" Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:23.862 [INFO][4570] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" Namespace="kube-system" Pod="coredns-76f75df574-xzvvx" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:23.984 [INFO][4582] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" HandleID="k8s-pod-network.914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.001 [INFO][4582] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" HandleID="k8s-pod-network.914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000520970), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", "pod":"coredns-76f75df574-xzvvx", "timestamp":"2024-12-13 01:32:23.984347679 +0000 UTC"}, Hostname:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.001 [INFO][4582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.001 [INFO][4582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.002 [INFO][4582] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal' Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.004 [INFO][4582] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.015 [INFO][4582] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.022 [INFO][4582] ipam/ipam.go 489: Trying affinity for 192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.025 [INFO][4582] ipam/ipam.go 155: Attempting to load block cidr=192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.029 [INFO][4582] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.029 [INFO][4582] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.031 [INFO][4582] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4 Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.038 [INFO][4582] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.049 [INFO][4582] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.93.134/26] block=192.168.93.128/26 handle="k8s-pod-network.914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.050 [INFO][4582] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.134/26] handle="k8s-pod-network.914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" host="ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal" Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.050 [INFO][4582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:32:24.091416 containerd[1460]: 2024-12-13 01:32:24.050 [INFO][4582] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.134/26] IPv6=[] ContainerID="914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" HandleID="k8s-pod-network.914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:24.096743 containerd[1460]: 2024-12-13 01:32:24.054 [INFO][4570] cni-plugin/k8s.go 386: Populated endpoint ContainerID="914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" Namespace="kube-system" Pod="coredns-76f75df574-xzvvx" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d9ea8ed9-c62f-497b-8b5c-9f11233b2716", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-xzvvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali464edb5108a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:24.096743 containerd[1460]: 2024-12-13 01:32:24.054 [INFO][4570] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.93.134/32] ContainerID="914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" Namespace="kube-system" Pod="coredns-76f75df574-xzvvx" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:24.096743 containerd[1460]: 2024-12-13 01:32:24.054 [INFO][4570] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali464edb5108a ContainerID="914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" Namespace="kube-system" Pod="coredns-76f75df574-xzvvx" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:24.096743 containerd[1460]: 2024-12-13 01:32:24.065 [INFO][4570] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" Namespace="kube-system" Pod="coredns-76f75df574-xzvvx" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:24.096743 containerd[1460]: 2024-12-13 01:32:24.065 [INFO][4570] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" Namespace="kube-system" Pod="coredns-76f75df574-xzvvx" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d9ea8ed9-c62f-497b-8b5c-9f11233b2716", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4", Pod:"coredns-76f75df574-xzvvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali464edb5108a", MAC:"9e:a6:fb:eb:bc:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:24.096743 containerd[1460]: 2024-12-13 01:32:24.083 [INFO][4570] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4" Namespace="kube-system" Pod="coredns-76f75df574-xzvvx" WorkloadEndpoint="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:24.213660 containerd[1460]: time="2024-12-13T01:32:24.213271542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:24.213660 containerd[1460]: time="2024-12-13T01:32:24.213344152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:24.213660 containerd[1460]: time="2024-12-13T01:32:24.213382167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:24.213660 containerd[1460]: time="2024-12-13T01:32:24.213517797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:24.262236 systemd[1]: Started cri-containerd-914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4.scope - libcontainer container 914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4. Dec 13 01:32:24.381338 containerd[1460]: time="2024-12-13T01:32:24.381269750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xzvvx,Uid:d9ea8ed9-c62f-497b-8b5c-9f11233b2716,Namespace:kube-system,Attempt:1,} returns sandbox id \"914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4\"" Dec 13 01:32:24.387717 containerd[1460]: time="2024-12-13T01:32:24.387652347Z" level=info msg="CreateContainer within sandbox \"914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:32:24.415659 containerd[1460]: time="2024-12-13T01:32:24.415489773Z" level=info msg="CreateContainer within sandbox \"914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7511dd99f4213bd7799e15777b1707d4ea437803d67fae0536aaaa074343a5a7\"" Dec 13 01:32:24.417921 containerd[1460]: time="2024-12-13T01:32:24.417839469Z" level=info msg="StartContainer for \"7511dd99f4213bd7799e15777b1707d4ea437803d67fae0536aaaa074343a5a7\"" Dec 13 01:32:24.477201 systemd[1]: Started cri-containerd-7511dd99f4213bd7799e15777b1707d4ea437803d67fae0536aaaa074343a5a7.scope - libcontainer container 7511dd99f4213bd7799e15777b1707d4ea437803d67fae0536aaaa074343a5a7. Dec 13 01:32:24.557867 containerd[1460]: time="2024-12-13T01:32:24.557793132Z" level=info msg="StartContainer for \"7511dd99f4213bd7799e15777b1707d4ea437803d67fae0536aaaa074343a5a7\" returns successfully" Dec 13 01:32:24.692326 systemd-networkd[1372]: calieab9ade064e: Gained IPv6LL Dec 13 01:32:24.784360 systemd[1]: Started sshd@7-10.128.0.13:22-147.75.109.163:60996.service - OpenSSH per-connection server daemon (147.75.109.163:60996). Dec 13 01:32:24.888219 kubelet[2608]: I1213 01:32:24.886835 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xzvvx" podStartSLOduration=37.886773237 podStartE2EDuration="37.886773237s" podCreationTimestamp="2024-12-13 01:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:24.885677059 +0000 UTC m=+50.526524162" watchObservedRunningTime="2024-12-13 01:32:24.886773237 +0000 UTC m=+50.527620338" Dec 13 01:32:25.114277 sshd[4686]: Accepted publickey for core from 147.75.109.163 port 60996 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:32:25.117653 sshd[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:25.126849 systemd-logind[1449]: New session 8 of user core. Dec 13 01:32:25.132650 systemd[1]: Started session-8.scope - Session 8 of User core. 
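The kubelet entry above reports podStartSLOduration=37.886773237 for coredns-76f75df574-xzvvx. Because firstStartedPulling is the zero time (no image pull was needed), the SLO duration is simply watchObservedRunningTime minus podCreationTimestamp; where a pull did happen, the tracker subtracts the pull window from the end-to-end time, which is why SLO and E2E differ for the calico-kube-controllers entry logged below. The arithmetic checks out in a few lines of Go (an illustration, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// coredns-76f75df574-xzvvx: no pull happened, so SLO == E2E.
	created := mustParse("2024-12-13 01:31:47 +0000 UTC")
	running := mustParse("2024-12-13 01:32:24.886773237 +0000 UTC")
	fmt.Println(running.Sub(created)) // 37.886773237s, matching podStartSLOduration

	// calico-kube-controllers (below): SLO = E2E minus the image-pull window.
	pullStart := mustParse("2024-12-13 01:32:22.48162303 +0000 UTC")
	pullEnd := mustParse("2024-12-13 01:32:25.488121518 +0000 UTC")
	e2e := 30.908520996 * float64(time.Second)
	fmt.Println(time.Duration(e2e) - pullEnd.Sub(pullStart)) // ~27.902022508s
}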
Dec 13 01:32:25.479191 containerd[1460]: time="2024-12-13T01:32:25.478961789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:25.481071 containerd[1460]: time="2024-12-13T01:32:25.480989466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Dec 13 01:32:25.482877 containerd[1460]: time="2024-12-13T01:32:25.482800782Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:25.486551 containerd[1460]: time="2024-12-13T01:32:25.486474180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:25.487670 containerd[1460]: time="2024-12-13T01:32:25.487618831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.827317378s"
Dec 13 01:32:25.487773 containerd[1460]: time="2024-12-13T01:32:25.487674408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Dec 13 01:32:25.489841 containerd[1460]: time="2024-12-13T01:32:25.489611745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 01:32:25.496892 sshd[4686]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:25.522725 containerd[1460]: time="2024-12-13T01:32:25.517351426Z" level=info msg="CreateContainer within sandbox \"e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Dec 13 01:32:25.522128 systemd[1]: sshd@7-10.128.0.13:22-147.75.109.163:60996.service: Deactivated successfully.
Dec 13 01:32:25.527570 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:32:25.530359 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:32:25.533423 systemd-logind[1449]: Removed session 8.
Dec 13 01:32:25.550604 containerd[1460]: time="2024-12-13T01:32:25.550546140Z" level=info msg="CreateContainer within sandbox \"e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9ad25d084f58d295e239079f0313509aee7ccc1f6d5ef191c0610108e152149c\""
Dec 13 01:32:25.553000 containerd[1460]: time="2024-12-13T01:32:25.552953191Z" level=info msg="StartContainer for \"9ad25d084f58d295e239079f0313509aee7ccc1f6d5ef191c0610108e152149c\""
Dec 13 01:32:25.597177 systemd[1]: Started cri-containerd-9ad25d084f58d295e239079f0313509aee7ccc1f6d5ef191c0610108e152149c.scope - libcontainer container 9ad25d084f58d295e239079f0313509aee7ccc1f6d5ef191c0610108e152149c.
Dec 13 01:32:25.661213 containerd[1460]: time="2024-12-13T01:32:25.661131301Z" level=info msg="StartContainer for \"9ad25d084f58d295e239079f0313509aee7ccc1f6d5ef191c0610108e152149c\" returns successfully"
Dec 13 01:32:25.908653 kubelet[2608]: I1213 01:32:25.908590 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d99b9d6cd-2nmkc" podStartSLOduration=27.90202251 podStartE2EDuration="30.908520996s" podCreationTimestamp="2024-12-13 01:31:55 +0000 UTC" firstStartedPulling="2024-12-13 01:32:22.48162303 +0000 UTC m=+48.122470121" lastFinishedPulling="2024-12-13 01:32:25.488121518 +0000 UTC m=+51.128968607" observedRunningTime="2024-12-13 01:32:25.907390569 +0000 UTC m=+51.548237671" watchObservedRunningTime="2024-12-13 01:32:25.908520996 +0000 UTC m=+51.549368124"
Dec 13 01:32:26.038729 systemd-networkd[1372]: cali464edb5108a: Gained IPv6LL
Dec 13 01:32:26.922053 systemd[1]: run-containerd-runc-k8s.io-9ad25d084f58d295e239079f0313509aee7ccc1f6d5ef191c0610108e152149c-runc.dh8zKC.mount: Deactivated successfully.
Dec 13 01:32:27.926225 containerd[1460]: time="2024-12-13T01:32:27.926149606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:27.927599 containerd[1460]: time="2024-12-13T01:32:27.927535266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Dec 13 01:32:27.929431 containerd[1460]: time="2024-12-13T01:32:27.929394933Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:27.932883 containerd[1460]: time="2024-12-13T01:32:27.932800921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:27.935965 containerd[1460]: time="2024-12-13T01:32:27.934036978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.443982054s"
Dec 13 01:32:27.935965 containerd[1460]: time="2024-12-13T01:32:27.934084834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 01:32:27.936669 containerd[1460]: time="2024-12-13T01:32:27.936624520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 01:32:27.939715 containerd[1460]: time="2024-12-13T01:32:27.939619470Z" level=info msg="CreateContainer within sandbox \"757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 01:32:27.957989 containerd[1460]: time="2024-12-13T01:32:27.957781609Z" level=info msg="CreateContainer within sandbox \"757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f5a4cec016eb4eeeccd82d71d60be9f02f21ad3398e28e0ed024fa4afaaf8df7\""
Dec 13 01:32:27.960433 containerd[1460]: time="2024-12-13T01:32:27.959950980Z" level=info msg="StartContainer for \"f5a4cec016eb4eeeccd82d71d60be9f02f21ad3398e28e0ed024fa4afaaf8df7\""
Dec 13 01:32:28.034294 systemd[1]: Started cri-containerd-f5a4cec016eb4eeeccd82d71d60be9f02f21ad3398e28e0ed024fa4afaaf8df7.scope - libcontainer container f5a4cec016eb4eeeccd82d71d60be9f02f21ad3398e28e0ed024fa4afaaf8df7.
Dec 13 01:32:28.120270 containerd[1460]: time="2024-12-13T01:32:28.120145497Z" level=info msg="StartContainer for \"f5a4cec016eb4eeeccd82d71d60be9f02f21ad3398e28e0ed024fa4afaaf8df7\" returns successfully"
Dec 13 01:32:28.227307 ntpd[1429]: Listen normally on 8 vxlan.calico 192.168.93.128:123
Dec 13 01:32:28.228474 ntpd[1429]: 13 Dec 01:32:28 ntpd[1429]: Listen normally on 8 vxlan.calico 192.168.93.128:123
Dec 13 01:32:28.228474 ntpd[1429]: 13 Dec 01:32:28 ntpd[1429]: Listen normally on 9 vxlan.calico [fe80::64ce:e0ff:fe6e:c634%4]:123
Dec 13 01:32:28.228474 ntpd[1429]: 13 Dec 01:32:28 ntpd[1429]: Listen normally on 10 cali85c9a628f0f [fe80::ecee:eeff:feee:eeee%7]:123
Dec 13 01:32:28.228474 ntpd[1429]: 13 Dec 01:32:28 ntpd[1429]: Listen normally on 11 cali5c82ea3ffbf [fe80::ecee:eeff:feee:eeee%8]:123
Dec 13 01:32:28.228474 ntpd[1429]: 13 Dec 01:32:28 ntpd[1429]: Listen normally on 12 calib9e145472ec [fe80::ecee:eeff:feee:eeee%9]:123
Dec 13 01:32:28.228474 ntpd[1429]: 13 Dec 01:32:28 ntpd[1429]: Listen normally on 13 calia2fe581d983 [fe80::ecee:eeff:feee:eeee%10]:123
Dec 13 01:32:28.228474 ntpd[1429]: 13 Dec 01:32:28 ntpd[1429]: Listen normally on 14 calieab9ade064e [fe80::ecee:eeff:feee:eeee%11]:123
Dec 13 01:32:28.228474 ntpd[1429]: 13 Dec 01:32:28 ntpd[1429]: Listen normally on 15 cali464edb5108a [fe80::ecee:eeff:feee:eeee%12]:123
Dec 13 01:32:28.227420 ntpd[1429]: Listen normally on 9 vxlan.calico [fe80::64ce:e0ff:fe6e:c634%4]:123
Dec 13 01:32:28.227492 ntpd[1429]: Listen normally on 10 cali85c9a628f0f [fe80::ecee:eeff:feee:eeee%7]:123
Dec 13 01:32:28.227544 ntpd[1429]: Listen normally on 11 cali5c82ea3ffbf [fe80::ecee:eeff:feee:eeee%8]:123
Dec 13 01:32:28.227595 ntpd[1429]: Listen normally on 12 calib9e145472ec [fe80::ecee:eeff:feee:eeee%9]:123
Dec 13 01:32:28.227689 ntpd[1429]: Listen normally on 13 calia2fe581d983 [fe80::ecee:eeff:feee:eeee%10]:123
Dec 13 01:32:28.227757 ntpd[1429]: Listen normally on 14 calieab9ade064e [fe80::ecee:eeff:feee:eeee%11]:123
Dec 13 01:32:28.227811 ntpd[1429]: Listen normally on 15 cali464edb5108a [fe80::ecee:eeff:feee:eeee%12]:123
Dec 13 01:32:29.281786 containerd[1460]: time="2024-12-13T01:32:29.281673465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:29.283204 containerd[1460]: time="2024-12-13T01:32:29.283122224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Dec 13 01:32:29.284596 containerd[1460]: time="2024-12-13T01:32:29.284518565Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:29.287650 containerd[1460]: time="2024-12-13T01:32:29.287574041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:29.289423 containerd[1460]: time="2024-12-13T01:32:29.288523301Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.351842953s"
Dec 13 01:32:29.289423 containerd[1460]: time="2024-12-13T01:32:29.288574209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 01:32:29.290216 containerd[1460]: time="2024-12-13T01:32:29.289979600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 01:32:29.291707 containerd[1460]: time="2024-12-13T01:32:29.291670044Z" level=info msg="CreateContainer within sandbox \"09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 01:32:29.313351 containerd[1460]: time="2024-12-13T01:32:29.313279350Z" level=info msg="CreateContainer within sandbox \"09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"eb448267cd56a8ceaec96e2ef4fb96c20a7414b08e9de07e7325d41ad0201312\""
Dec 13 01:32:29.314847 containerd[1460]: time="2024-12-13T01:32:29.314751046Z" level=info msg="StartContainer for \"eb448267cd56a8ceaec96e2ef4fb96c20a7414b08e9de07e7325d41ad0201312\""
Dec 13 01:32:29.410790 systemd[1]: run-containerd-runc-k8s.io-eb448267cd56a8ceaec96e2ef4fb96c20a7414b08e9de07e7325d41ad0201312-runc.pRmoYU.mount: Deactivated successfully.
Dec 13 01:32:29.422201 systemd[1]: Started cri-containerd-eb448267cd56a8ceaec96e2ef4fb96c20a7414b08e9de07e7325d41ad0201312.scope - libcontainer container eb448267cd56a8ceaec96e2ef4fb96c20a7414b08e9de07e7325d41ad0201312.
Dec 13 01:32:29.482475 containerd[1460]: time="2024-12-13T01:32:29.482417926Z" level=info msg="StartContainer for \"eb448267cd56a8ceaec96e2ef4fb96c20a7414b08e9de07e7325d41ad0201312\" returns successfully"
Dec 13 01:32:29.510024 containerd[1460]: time="2024-12-13T01:32:29.509967794Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:29.511354 containerd[1460]: time="2024-12-13T01:32:29.511278993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Dec 13 01:32:29.514554 containerd[1460]: time="2024-12-13T01:32:29.514509141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 224.486368ms"
Dec 13 01:32:29.514554 containerd[1460]: time="2024-12-13T01:32:29.514555946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 01:32:29.517521 containerd[1460]: time="2024-12-13T01:32:29.517353038Z" level=info msg="CreateContainer within sandbox \"62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 01:32:29.538449 containerd[1460]: time="2024-12-13T01:32:29.537762447Z" level=info msg="CreateContainer within sandbox \"62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3780b6ccecaf672ebf3bb7a3c2985f851e4476ab774a043678f8e710e2bcb836\""
Dec 13 01:32:29.540696 containerd[1460]: time="2024-12-13T01:32:29.539341513Z" level=info msg="StartContainer for \"3780b6ccecaf672ebf3bb7a3c2985f851e4476ab774a043678f8e710e2bcb836\""
Dec 13 01:32:29.595579 systemd[1]: Started cri-containerd-3780b6ccecaf672ebf3bb7a3c2985f851e4476ab774a043678f8e710e2bcb836.scope - libcontainer container 3780b6ccecaf672ebf3bb7a3c2985f851e4476ab774a043678f8e710e2bcb836.
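Worth noting in the two pulls above: the node-driver-registrar pull read 10,501,081 bytes and took 1.35s, while the apiserver "pull" read only 77 bytes and completed in 224ms even though containerd reports the image size as 43,494,504 bytes. The ImageUpdate (rather than ImageCreate) event and the tiny byte count indicate the layers were already in the local content store, so only the manifest was re-resolved. A quick sanity check of that arithmetic, with all values copied from the log:

```go
// Sanity-check the pull figures logged above. Values are copied from
// the log; nothing here queries containerd.
package main

import (
	"fmt"
	"time"
)

func main() {
	registrarDur, _ := time.ParseDuration("1.351842953s")
	apiserverDur, _ := time.ParseDuration("224.486368ms")

	registrarBytes := 10501081.0 // bytes actually read for node-driver-registrar
	apiserverBytes := 77.0       // bytes actually read for apiserver
	apiserverSize := 43494504.0  // size containerd reports for the apiserver image

	fmt.Printf("registrar: %.1f MiB read at %.1f MiB/s\n",
		registrarBytes/(1<<20), registrarBytes/(1<<20)/registrarDur.Seconds())
	fmt.Printf("apiserver: %.0f bytes read for a %.1f MiB image in %s => cache hit\n",
		apiserverBytes, apiserverSize/(1<<20), apiserverDur)
}
```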
Dec 13 01:32:29.695144 containerd[1460]: time="2024-12-13T01:32:29.695058751Z" level=info msg="StartContainer for \"3780b6ccecaf672ebf3bb7a3c2985f851e4476ab774a043678f8e710e2bcb836\" returns successfully"
Dec 13 01:32:29.730387 kubelet[2608]: I1213 01:32:29.730344 2608 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 01:32:29.731073 kubelet[2608]: I1213 01:32:29.730403 2608 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 01:32:29.912233 kubelet[2608]: I1213 01:32:29.912082 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:32:29.932547 kubelet[2608]: I1213 01:32:29.931712 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74c8c6c788-m2ckw" podStartSLOduration=29.54220774 podStartE2EDuration="34.931582487s" podCreationTimestamp="2024-12-13 01:31:55 +0000 UTC" firstStartedPulling="2024-12-13 01:32:22.545996948 +0000 UTC m=+48.186844034" lastFinishedPulling="2024-12-13 01:32:27.9353717 +0000 UTC m=+53.576218781" observedRunningTime="2024-12-13 01:32:28.926715978 +0000 UTC m=+54.567563070" watchObservedRunningTime="2024-12-13 01:32:29.931582487 +0000 UTC m=+55.572429587"
Dec 13 01:32:29.966107 kubelet[2608]: I1213 01:32:29.966058 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74c8c6c788-jkfpz" podStartSLOduration=28.579375802 podStartE2EDuration="34.966001604s" podCreationTimestamp="2024-12-13 01:31:55 +0000 UTC" firstStartedPulling="2024-12-13 01:32:23.128420507 +0000 UTC m=+48.769267582" lastFinishedPulling="2024-12-13 01:32:29.515046295 +0000 UTC m=+55.155893384" observedRunningTime="2024-12-13 01:32:29.935384202 +0000 UTC m=+55.576231302" watchObservedRunningTime="2024-12-13 01:32:29.966001604 +0000 UTC m=+55.606848702"
Dec 13 01:32:30.552813 systemd[1]: Started sshd@8-10.128.0.13:22-147.75.109.163:47726.service - OpenSSH per-connection server daemon (147.75.109.163:47726).
Dec 13 01:32:30.868023 sshd[4929]: Accepted publickey for core from 147.75.109.163 port 47726 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:32:30.869350 sshd[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:30.877757 systemd-logind[1449]: New session 9 of user core.
Dec 13 01:32:30.883177 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:32:31.203108 sshd[4929]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:31.208438 systemd[1]: sshd@8-10.128.0.13:22-147.75.109.163:47726.service: Deactivated successfully.
Dec 13 01:32:31.211567 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:32:31.214146 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:32:31.215699 systemd-logind[1449]: Removed session 9.
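The pod_startup_latency_tracker lines above encode a fixed relationship: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since the SLO metric excludes pull time. The sketch below reproduces the calico-apiserver-74c8c6c788-m2ckw numbers; the timestamps are copied from the log, and the results match the logged values to within the few nanoseconds of wall-vs-monotonic clock skew visible in the m=+ offsets.

```go
// Reproduce the kubelet's pod startup duration arithmetic from the
// timestamps logged for calico-apiserver-74c8c6c788-m2ckw.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2024-12-13 01:31:55 +0000 UTC")
	firstPull := parse("2024-12-13 01:32:22.545996948 +0000 UTC")
	lastPull := parse("2024-12-13 01:32:27.9353717 +0000 UTC")
	running := parse("2024-12-13 01:32:29.931582487 +0000 UTC")

	e2e := running.Sub(created)     // podStartE2EDuration: 34.931582487s
	pull := lastPull.Sub(firstPull) // image pull window: ~5.389s
	slo := e2e - pull               // podStartSLOduration: ~29.54220774s

	fmt.Println("E2E:", e2e, "pull:", pull, "SLO:", slo)
}
```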
Dec 13 01:32:31.355693 kubelet[2608]: I1213 01:32:31.355632 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-jkhgp" podStartSLOduration=28.318248645 podStartE2EDuration="36.355570335s" podCreationTimestamp="2024-12-13 01:31:55 +0000 UTC" firstStartedPulling="2024-12-13 01:32:21.251825513 +0000 UTC m=+46.892672601" lastFinishedPulling="2024-12-13 01:32:29.289147198 +0000 UTC m=+54.929994291" observedRunningTime="2024-12-13 01:32:29.968067296 +0000 UTC m=+55.608914394" watchObservedRunningTime="2024-12-13 01:32:31.355570335 +0000 UTC m=+56.996417435" Dec 13 01:32:34.543052 containerd[1460]: time="2024-12-13T01:32:34.542975461Z" level=info msg="StopPodSandbox for \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\"" Dec 13 01:32:34.641649 containerd[1460]: 2024-12-13 01:32:34.599 [WARNING][4959] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d9ea8ed9-c62f-497b-8b5c-9f11233b2716", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4", Pod:"coredns-76f75df574-xzvvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali464edb5108a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:34.641649 containerd[1460]: 2024-12-13 01:32:34.599 [INFO][4959] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:34.641649 containerd[1460]: 2024-12-13 01:32:34.600 [INFO][4959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" iface="eth0" netns="" Dec 13 01:32:34.641649 containerd[1460]: 2024-12-13 01:32:34.600 [INFO][4959] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:34.641649 containerd[1460]: 2024-12-13 01:32:34.600 [INFO][4959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:34.641649 containerd[1460]: 2024-12-13 01:32:34.629 [INFO][4967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" HandleID="k8s-pod-network.6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:34.641649 containerd[1460]: 2024-12-13 01:32:34.630 [INFO][4967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:34.641649 containerd[1460]: 2024-12-13 01:32:34.630 [INFO][4967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:34.641649 containerd[1460]: 2024-12-13 01:32:34.637 [WARNING][4967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" HandleID="k8s-pod-network.6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:34.641649 containerd[1460]: 2024-12-13 01:32:34.637 [INFO][4967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" HandleID="k8s-pod-network.6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:34.641649 containerd[1460]: 2024-12-13 01:32:34.639 [INFO][4967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:34.641649 containerd[1460]: 2024-12-13 01:32:34.640 [INFO][4959] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:34.643276 containerd[1460]: time="2024-12-13T01:32:34.641679099Z" level=info msg="TearDown network for sandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\" successfully" Dec 13 01:32:34.643276 containerd[1460]: time="2024-12-13T01:32:34.641715612Z" level=info msg="StopPodSandbox for \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\" returns successfully" Dec 13 01:32:34.643276 containerd[1460]: time="2024-12-13T01:32:34.642599939Z" level=info msg="RemovePodSandbox for \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\"" Dec 13 01:32:34.643276 containerd[1460]: time="2024-12-13T01:32:34.642728738Z" level=info msg="Forcibly stopping sandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\"" Dec 13 01:32:34.733629 containerd[1460]: 2024-12-13 01:32:34.693 [WARNING][4985] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d9ea8ed9-c62f-497b-8b5c-9f11233b2716", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"914f9eafbc8f94712338d91333c5e8b2172c64eca5117a2e58e0933e002b7ce4", Pod:"coredns-76f75df574-xzvvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali464edb5108a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:34.733629 containerd[1460]: 2024-12-13 01:32:34.693 [INFO][4985] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:34.733629 containerd[1460]: 2024-12-13 01:32:34.693 [INFO][4985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" iface="eth0" netns="" Dec 13 01:32:34.733629 containerd[1460]: 2024-12-13 01:32:34.693 [INFO][4985] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:34.733629 containerd[1460]: 2024-12-13 01:32:34.694 [INFO][4985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:34.733629 containerd[1460]: 2024-12-13 01:32:34.722 [INFO][4991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" HandleID="k8s-pod-network.6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:34.733629 containerd[1460]: 2024-12-13 01:32:34.722 [INFO][4991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:32:34.733629 containerd[1460]: 2024-12-13 01:32:34.722 [INFO][4991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:34.733629 containerd[1460]: 2024-12-13 01:32:34.729 [WARNING][4991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" HandleID="k8s-pod-network.6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:34.733629 containerd[1460]: 2024-12-13 01:32:34.729 [INFO][4991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" HandleID="k8s-pod-network.6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--xzvvx-eth0" Dec 13 01:32:34.733629 containerd[1460]: 2024-12-13 01:32:34.731 [INFO][4991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:34.733629 containerd[1460]: 2024-12-13 01:32:34.732 [INFO][4985] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888" Dec 13 01:32:34.734873 containerd[1460]: time="2024-12-13T01:32:34.733684201Z" level=info msg="TearDown network for sandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\" successfully" Dec 13 01:32:34.739090 containerd[1460]: time="2024-12-13T01:32:34.739023667Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:32:34.739249 containerd[1460]: time="2024-12-13T01:32:34.739122240Z" level=info msg="RemovePodSandbox \"6e237f445206345dc331e65f5be3b9d45bf99a555eb5a5986438c3c9f40a4888\" returns successfully" Dec 13 01:32:34.739984 containerd[1460]: time="2024-12-13T01:32:34.739921996Z" level=info msg="StopPodSandbox for \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\"" Dec 13 01:32:34.834660 containerd[1460]: 2024-12-13 01:32:34.794 [WARNING][5009] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f72c213-293a-4c61-89bb-f506676840e6", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f", Pod:"csi-node-driver-jkhgp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c82ea3ffbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:34.834660 containerd[1460]: 2024-12-13 01:32:34.794 [INFO][5009] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:34.834660 containerd[1460]: 2024-12-13 01:32:34.794 [INFO][5009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" iface="eth0" netns="" Dec 13 01:32:34.834660 containerd[1460]: 2024-12-13 01:32:34.794 [INFO][5009] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:34.834660 containerd[1460]: 2024-12-13 01:32:34.794 [INFO][5009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:34.834660 containerd[1460]: 2024-12-13 01:32:34.821 [INFO][5015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" HandleID="k8s-pod-network.eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:34.834660 containerd[1460]: 2024-12-13 01:32:34.821 [INFO][5015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:34.834660 containerd[1460]: 2024-12-13 01:32:34.821 [INFO][5015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:34.834660 containerd[1460]: 2024-12-13 01:32:34.830 [WARNING][5015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" HandleID="k8s-pod-network.eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:34.834660 containerd[1460]: 2024-12-13 01:32:34.830 [INFO][5015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" HandleID="k8s-pod-network.eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:34.834660 containerd[1460]: 2024-12-13 01:32:34.831 [INFO][5015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:34.834660 containerd[1460]: 2024-12-13 01:32:34.833 [INFO][5009] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:34.834660 containerd[1460]: time="2024-12-13T01:32:34.834607060Z" level=info msg="TearDown network for sandbox \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\" successfully" Dec 13 01:32:34.836499 containerd[1460]: time="2024-12-13T01:32:34.834979311Z" level=info msg="StopPodSandbox for \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\" returns successfully" Dec 13 01:32:34.837691 containerd[1460]: time="2024-12-13T01:32:34.835746686Z" level=info msg="RemovePodSandbox for \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\"" Dec 13 01:32:34.837691 containerd[1460]: time="2024-12-13T01:32:34.836953089Z" level=info msg="Forcibly stopping sandbox \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\"" Dec 13 01:32:34.993057 containerd[1460]: 2024-12-13 01:32:34.901 [WARNING][5033] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f72c213-293a-4c61-89bb-f506676840e6", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"09adffb9b694698c66ccd594f5bb46c59410a219cb252d9c0a6b3035a910265f", Pod:"csi-node-driver-jkhgp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c82ea3ffbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:34.993057 containerd[1460]: 2024-12-13 01:32:34.901 [INFO][5033] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:34.993057 containerd[1460]: 2024-12-13 01:32:34.901 [INFO][5033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" iface="eth0" netns="" Dec 13 01:32:34.993057 containerd[1460]: 2024-12-13 01:32:34.901 [INFO][5033] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:34.993057 containerd[1460]: 2024-12-13 01:32:34.901 [INFO][5033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:34.993057 containerd[1460]: 2024-12-13 01:32:34.952 [INFO][5040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" HandleID="k8s-pod-network.eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:34.993057 containerd[1460]: 2024-12-13 01:32:34.954 [INFO][5040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:34.993057 containerd[1460]: 2024-12-13 01:32:34.954 [INFO][5040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:34.993057 containerd[1460]: 2024-12-13 01:32:34.981 [WARNING][5040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" HandleID="k8s-pod-network.eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:34.993057 containerd[1460]: 2024-12-13 01:32:34.981 [INFO][5040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" HandleID="k8s-pod-network.eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-csi--node--driver--jkhgp-eth0" Dec 13 01:32:34.993057 containerd[1460]: 2024-12-13 01:32:34.989 [INFO][5040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:34.993057 containerd[1460]: 2024-12-13 01:32:34.991 [INFO][5033] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3" Dec 13 01:32:34.997110 containerd[1460]: time="2024-12-13T01:32:34.996084830Z" level=info msg="TearDown network for sandbox \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\" successfully" Dec 13 01:32:35.001656 containerd[1460]: time="2024-12-13T01:32:35.001437016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:32:35.001656 containerd[1460]: time="2024-12-13T01:32:35.001548226Z" level=info msg="RemovePodSandbox \"eae924fa5d3d16a17e0c4d1ad3d8e55457755b0e41a5f7e543ab9229e5e9bca3\" returns successfully" Dec 13 01:32:35.002413 containerd[1460]: time="2024-12-13T01:32:35.002376744Z" level=info msg="StopPodSandbox for \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\"" Dec 13 01:32:35.152249 containerd[1460]: 2024-12-13 01:32:35.109 [WARNING][5058] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0", GenerateName:"calico-apiserver-74c8c6c788-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a6f094a-3181-4104-9078-a9c6ee707b6a", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c8c6c788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52", Pod:"calico-apiserver-74c8c6c788-m2ckw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2fe581d983", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:35.152249 containerd[1460]: 2024-12-13 01:32:35.109 [INFO][5058] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:35.152249 containerd[1460]: 2024-12-13 01:32:35.110 [INFO][5058] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" iface="eth0" netns="" Dec 13 01:32:35.152249 containerd[1460]: 2024-12-13 01:32:35.110 [INFO][5058] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:35.152249 containerd[1460]: 2024-12-13 01:32:35.110 [INFO][5058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:35.152249 containerd[1460]: 2024-12-13 01:32:35.138 [INFO][5067] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" HandleID="k8s-pod-network.826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:35.152249 containerd[1460]: 2024-12-13 01:32:35.138 [INFO][5067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:35.152249 containerd[1460]: 2024-12-13 01:32:35.138 [INFO][5067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:35.152249 containerd[1460]: 2024-12-13 01:32:35.147 [WARNING][5067] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" HandleID="k8s-pod-network.826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:35.152249 containerd[1460]: 2024-12-13 01:32:35.147 [INFO][5067] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" HandleID="k8s-pod-network.826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:35.152249 containerd[1460]: 2024-12-13 01:32:35.149 [INFO][5067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:35.152249 containerd[1460]: 2024-12-13 01:32:35.150 [INFO][5058] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:35.153439 containerd[1460]: time="2024-12-13T01:32:35.152319383Z" level=info msg="TearDown network for sandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\" successfully" Dec 13 01:32:35.153439 containerd[1460]: time="2024-12-13T01:32:35.152354318Z" level=info msg="StopPodSandbox for \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\" returns successfully" Dec 13 01:32:35.153439 containerd[1460]: time="2024-12-13T01:32:35.152986592Z" level=info msg="RemovePodSandbox for \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\"" Dec 13 01:32:35.153439 containerd[1460]: time="2024-12-13T01:32:35.153028220Z" level=info msg="Forcibly stopping sandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\"" Dec 13 01:32:35.252335 containerd[1460]: 2024-12-13 01:32:35.206 [WARNING][5085] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0", GenerateName:"calico-apiserver-74c8c6c788-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a6f094a-3181-4104-9078-a9c6ee707b6a", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c8c6c788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"757fa416e2c9f98c3e73351514d1639fc925d78cfa87ce5957fb296ab9b45a52", Pod:"calico-apiserver-74c8c6c788-m2ckw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2fe581d983", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:35.252335 containerd[1460]: 2024-12-13 01:32:35.207 [INFO][5085] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:35.252335 containerd[1460]: 2024-12-13 01:32:35.207 [INFO][5085] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" iface="eth0" netns="" Dec 13 01:32:35.252335 containerd[1460]: 2024-12-13 01:32:35.207 [INFO][5085] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:35.252335 containerd[1460]: 2024-12-13 01:32:35.207 [INFO][5085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:35.252335 containerd[1460]: 2024-12-13 01:32:35.234 [INFO][5091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" HandleID="k8s-pod-network.826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:35.252335 containerd[1460]: 2024-12-13 01:32:35.235 [INFO][5091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:35.252335 containerd[1460]: 2024-12-13 01:32:35.235 [INFO][5091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:35.252335 containerd[1460]: 2024-12-13 01:32:35.246 [WARNING][5091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" HandleID="k8s-pod-network.826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:35.252335 containerd[1460]: 2024-12-13 01:32:35.246 [INFO][5091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" HandleID="k8s-pod-network.826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--m2ckw-eth0" Dec 13 01:32:35.252335 containerd[1460]: 2024-12-13 01:32:35.248 [INFO][5091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:35.252335 containerd[1460]: 2024-12-13 01:32:35.250 [INFO][5085] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352" Dec 13 01:32:35.253283 containerd[1460]: time="2024-12-13T01:32:35.252389366Z" level=info msg="TearDown network for sandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\" successfully" Dec 13 01:32:35.263500 containerd[1460]: time="2024-12-13T01:32:35.263367042Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:32:35.263863 containerd[1460]: time="2024-12-13T01:32:35.263571643Z" level=info msg="RemovePodSandbox \"826f005615bb05362f1a23f4ed997df1712d60a7f6a834f782bd65a4059ef352\" returns successfully" Dec 13 01:32:35.264644 containerd[1460]: time="2024-12-13T01:32:35.264578320Z" level=info msg="StopPodSandbox for \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\"" Dec 13 01:32:35.374993 containerd[1460]: 2024-12-13 01:32:35.323 [WARNING][5109] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ad72aa77-7913-4d0d-bc7f-8bd9b390797b", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d", Pod:"coredns-76f75df574-wsrph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85c9a628f0f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:35.374993 containerd[1460]: 2024-12-13 01:32:35.323 [INFO][5109] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:35.374993 containerd[1460]: 2024-12-13 01:32:35.323 [INFO][5109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" iface="eth0" netns="" Dec 13 01:32:35.374993 containerd[1460]: 2024-12-13 01:32:35.323 [INFO][5109] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:35.374993 containerd[1460]: 2024-12-13 01:32:35.323 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:35.374993 containerd[1460]: 2024-12-13 01:32:35.356 [INFO][5115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" HandleID="k8s-pod-network.2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:35.374993 containerd[1460]: 2024-12-13 01:32:35.356 [INFO][5115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:32:35.374993 containerd[1460]: 2024-12-13 01:32:35.356 [INFO][5115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:35.374993 containerd[1460]: 2024-12-13 01:32:35.368 [WARNING][5115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" HandleID="k8s-pod-network.2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:35.374993 containerd[1460]: 2024-12-13 01:32:35.369 [INFO][5115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" HandleID="k8s-pod-network.2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:35.374993 containerd[1460]: 2024-12-13 01:32:35.372 [INFO][5115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:35.374993 containerd[1460]: 2024-12-13 01:32:35.373 [INFO][5109] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:35.376324 containerd[1460]: time="2024-12-13T01:32:35.375032068Z" level=info msg="TearDown network for sandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\" successfully" Dec 13 01:32:35.376324 containerd[1460]: time="2024-12-13T01:32:35.375069543Z" level=info msg="StopPodSandbox for \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\" returns successfully" Dec 13 01:32:35.376324 containerd[1460]: time="2024-12-13T01:32:35.375910187Z" level=info msg="RemovePodSandbox for \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\"" Dec 13 01:32:35.376324 containerd[1460]: time="2024-12-13T01:32:35.375980255Z" level=info msg="Forcibly stopping sandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\"" Dec 13 01:32:35.478401 containerd[1460]: 2024-12-13 01:32:35.425 [WARNING][5133] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ad72aa77-7913-4d0d-bc7f-8bd9b390797b", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"d4dc7e827edf9ff66fad54f0f4e2c400542c9d6646e3ab36b72a8bff6bccd27d", Pod:"coredns-76f75df574-wsrph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85c9a628f0f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:35.478401 containerd[1460]: 2024-12-13 01:32:35.425 [INFO][5133] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:35.478401 containerd[1460]: 2024-12-13 01:32:35.425 [INFO][5133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" iface="eth0" netns="" Dec 13 01:32:35.478401 containerd[1460]: 2024-12-13 01:32:35.425 [INFO][5133] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:35.478401 containerd[1460]: 2024-12-13 01:32:35.425 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:35.478401 containerd[1460]: 2024-12-13 01:32:35.463 [INFO][5139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" HandleID="k8s-pod-network.2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:35.478401 containerd[1460]: 2024-12-13 01:32:35.463 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:32:35.478401 containerd[1460]: 2024-12-13 01:32:35.463 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:35.478401 containerd[1460]: 2024-12-13 01:32:35.473 [WARNING][5139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" HandleID="k8s-pod-network.2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:35.478401 containerd[1460]: 2024-12-13 01:32:35.473 [INFO][5139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" HandleID="k8s-pod-network.2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-coredns--76f75df574--wsrph-eth0" Dec 13 01:32:35.478401 containerd[1460]: 2024-12-13 01:32:35.475 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:35.478401 containerd[1460]: 2024-12-13 01:32:35.476 [INFO][5133] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7" Dec 13 01:32:35.478401 containerd[1460]: time="2024-12-13T01:32:35.478354366Z" level=info msg="TearDown network for sandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\" successfully" Dec 13 01:32:35.484840 containerd[1460]: time="2024-12-13T01:32:35.484620809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:32:35.484840 containerd[1460]: time="2024-12-13T01:32:35.484716789Z" level=info msg="RemovePodSandbox \"2a4342e2c431f19ada055737789e5d8475f40672505975a2f67c9acc73cf01e7\" returns successfully" Dec 13 01:32:35.485523 containerd[1460]: time="2024-12-13T01:32:35.485469357Z" level=info msg="StopPodSandbox for \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\"" Dec 13 01:32:35.573579 containerd[1460]: 2024-12-13 01:32:35.531 [WARNING][5158] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0", GenerateName:"calico-kube-controllers-d99b9d6cd-", Namespace:"calico-system", SelfLink:"", UID:"7d47f189-a8fb-4943-9daa-99592014efac", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d99b9d6cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68", Pod:"calico-kube-controllers-d99b9d6cd-2nmkc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9e145472ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:35.573579 containerd[1460]: 2024-12-13 01:32:35.532 [INFO][5158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:35.573579 containerd[1460]: 2024-12-13 01:32:35.532 [INFO][5158] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" iface="eth0" netns="" Dec 13 01:32:35.573579 containerd[1460]: 2024-12-13 01:32:35.532 [INFO][5158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:35.573579 containerd[1460]: 2024-12-13 01:32:35.532 [INFO][5158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:35.573579 containerd[1460]: 2024-12-13 01:32:35.560 [INFO][5164] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" HandleID="k8s-pod-network.0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:35.573579 containerd[1460]: 2024-12-13 01:32:35.560 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:35.573579 containerd[1460]: 2024-12-13 01:32:35.560 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:35.573579 containerd[1460]: 2024-12-13 01:32:35.568 [WARNING][5164] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" HandleID="k8s-pod-network.0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:35.573579 containerd[1460]: 2024-12-13 01:32:35.568 [INFO][5164] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" HandleID="k8s-pod-network.0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:35.573579 containerd[1460]: 2024-12-13 01:32:35.570 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:35.573579 containerd[1460]: 2024-12-13 01:32:35.572 [INFO][5158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:35.575497 containerd[1460]: time="2024-12-13T01:32:35.573612871Z" level=info msg="TearDown network for sandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\" successfully" Dec 13 01:32:35.575497 containerd[1460]: time="2024-12-13T01:32:35.573652815Z" level=info msg="StopPodSandbox for \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\" returns successfully" Dec 13 01:32:35.575497 containerd[1460]: time="2024-12-13T01:32:35.574558979Z" level=info msg="RemovePodSandbox for \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\"" Dec 13 01:32:35.575497 containerd[1460]: time="2024-12-13T01:32:35.574599869Z" level=info msg="Forcibly stopping sandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\"" Dec 13 01:32:35.667729 containerd[1460]: 2024-12-13 01:32:35.622 [WARNING][5182] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0", GenerateName:"calico-kube-controllers-d99b9d6cd-", Namespace:"calico-system", SelfLink:"", UID:"7d47f189-a8fb-4943-9daa-99592014efac", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d99b9d6cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"e5f1b71ade4db1daa700742b2700bb440d939d50ffb226401df353f2a0fd6a68", Pod:"calico-kube-controllers-d99b9d6cd-2nmkc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9e145472ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:35.667729 containerd[1460]: 2024-12-13 01:32:35.622 [INFO][5182] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:35.667729 containerd[1460]: 2024-12-13 01:32:35.622 [INFO][5182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" iface="eth0" netns="" Dec 13 01:32:35.667729 containerd[1460]: 2024-12-13 01:32:35.622 [INFO][5182] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:35.667729 containerd[1460]: 2024-12-13 01:32:35.622 [INFO][5182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:35.667729 containerd[1460]: 2024-12-13 01:32:35.651 [INFO][5188] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" HandleID="k8s-pod-network.0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:35.667729 containerd[1460]: 2024-12-13 01:32:35.651 [INFO][5188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:35.667729 containerd[1460]: 2024-12-13 01:32:35.651 [INFO][5188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:35.667729 containerd[1460]: 2024-12-13 01:32:35.662 [WARNING][5188] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" HandleID="k8s-pod-network.0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:35.667729 containerd[1460]: 2024-12-13 01:32:35.663 [INFO][5188] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" HandleID="k8s-pod-network.0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--kube--controllers--d99b9d6cd--2nmkc-eth0" Dec 13 01:32:35.667729 containerd[1460]: 2024-12-13 01:32:35.664 [INFO][5188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:35.667729 containerd[1460]: 2024-12-13 01:32:35.666 [INFO][5182] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b" Dec 13 01:32:35.667729 containerd[1460]: time="2024-12-13T01:32:35.667619405Z" level=info msg="TearDown network for sandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\" successfully" Dec 13 01:32:35.672378 containerd[1460]: time="2024-12-13T01:32:35.672313302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:32:35.672556 containerd[1460]: time="2024-12-13T01:32:35.672408198Z" level=info msg="RemovePodSandbox \"0cdd8c96d9019260a2ccd574fbc27bd7a8b00522501b351988dbbf44e54b263b\" returns successfully" Dec 13 01:32:35.676953 containerd[1460]: time="2024-12-13T01:32:35.674861814Z" level=info msg="StopPodSandbox for \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\"" Dec 13 01:32:35.779283 containerd[1460]: 2024-12-13 01:32:35.730 [WARNING][5207] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0", GenerateName:"calico-apiserver-74c8c6c788-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d5cda69-3ee1-4238-8b54-30e176b7b3d7", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c8c6c788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518", Pod:"calico-apiserver-74c8c6c788-jkfpz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieab9ade064e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:35.779283 containerd[1460]: 2024-12-13 01:32:35.730 [INFO][5207] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Dec 13 01:32:35.779283 containerd[1460]: 2024-12-13 01:32:35.730 [INFO][5207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" iface="eth0" netns="" Dec 13 01:32:35.779283 containerd[1460]: 2024-12-13 01:32:35.731 [INFO][5207] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Dec 13 01:32:35.779283 containerd[1460]: 2024-12-13 01:32:35.731 [INFO][5207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Dec 13 01:32:35.779283 containerd[1460]: 2024-12-13 01:32:35.761 [INFO][5213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" HandleID="k8s-pod-network.c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:35.779283 containerd[1460]: 2024-12-13 01:32:35.761 [INFO][5213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:35.779283 containerd[1460]: 2024-12-13 01:32:35.762 [INFO][5213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:35.779283 containerd[1460]: 2024-12-13 01:32:35.772 [WARNING][5213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" HandleID="k8s-pod-network.c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:35.779283 containerd[1460]: 2024-12-13 01:32:35.772 [INFO][5213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" HandleID="k8s-pod-network.c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:35.779283 containerd[1460]: 2024-12-13 01:32:35.775 [INFO][5213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:35.779283 containerd[1460]: 2024-12-13 01:32:35.777 [INFO][5207] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Dec 13 01:32:35.780139 containerd[1460]: time="2024-12-13T01:32:35.780014689Z" level=info msg="TearDown network for sandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\" successfully" Dec 13 01:32:35.780139 containerd[1460]: time="2024-12-13T01:32:35.780053765Z" level=info msg="StopPodSandbox for \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\" returns successfully" Dec 13 01:32:35.781357 containerd[1460]: time="2024-12-13T01:32:35.781320438Z" level=info msg="RemovePodSandbox for \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\"" Dec 13 01:32:35.782100 containerd[1460]: time="2024-12-13T01:32:35.781611578Z" level=info msg="Forcibly stopping sandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\"" Dec 13 01:32:35.871001 containerd[1460]: 2024-12-13 01:32:35.831 [WARNING][5232] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0", GenerateName:"calico-apiserver-74c8c6c788-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d5cda69-3ee1-4238-8b54-30e176b7b3d7", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c8c6c788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-bcebeea2c0e6b5fd2066.c.flatcar-212911.internal", ContainerID:"62ace614e6cf03ab406f5c59f051b4eb8e27ebb027bb0144f18d75d0038ce518", Pod:"calico-apiserver-74c8c6c788-jkfpz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieab9ade064e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:35.871001 containerd[1460]: 2024-12-13 01:32:35.831 [INFO][5232] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Dec 13 01:32:35.871001 containerd[1460]: 2024-12-13 01:32:35.831 [INFO][5232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" iface="eth0" netns="" Dec 13 01:32:35.871001 containerd[1460]: 2024-12-13 01:32:35.831 [INFO][5232] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Dec 13 01:32:35.871001 containerd[1460]: 2024-12-13 01:32:35.831 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Dec 13 01:32:35.871001 containerd[1460]: 2024-12-13 01:32:35.858 [INFO][5238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" HandleID="k8s-pod-network.c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:35.871001 containerd[1460]: 2024-12-13 01:32:35.858 [INFO][5238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:35.871001 containerd[1460]: 2024-12-13 01:32:35.858 [INFO][5238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:35.871001 containerd[1460]: 2024-12-13 01:32:35.866 [WARNING][5238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" HandleID="k8s-pod-network.c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:35.871001 containerd[1460]: 2024-12-13 01:32:35.866 [INFO][5238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" HandleID="k8s-pod-network.c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Workload="ci--4081--2--1--bcebeea2c0e6b5fd2066.c.flatcar--212911.internal-k8s-calico--apiserver--74c8c6c788--jkfpz-eth0" Dec 13 01:32:35.871001 containerd[1460]: 2024-12-13 01:32:35.868 [INFO][5238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:32:35.871001 containerd[1460]: 2024-12-13 01:32:35.869 [INFO][5232] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e" Dec 13 01:32:35.871890 containerd[1460]: time="2024-12-13T01:32:35.871057517Z" level=info msg="TearDown network for sandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\" successfully" Dec 13 01:32:35.876069 containerd[1460]: time="2024-12-13T01:32:35.875996861Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:32:35.876298 containerd[1460]: time="2024-12-13T01:32:35.876092176Z" level=info msg="RemovePodSandbox \"c87e77f235124a8c20dc0d6af478174640622ad73cc2f562dfa592ea06b3fc3e\" returns successfully" Dec 13 01:32:36.259333 systemd[1]: Started sshd@9-10.128.0.13:22-147.75.109.163:41594.service - OpenSSH per-connection server daemon (147.75.109.163:41594). Dec 13 01:32:36.551405 sshd[5245]: Accepted publickey for core from 147.75.109.163 port 41594 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:32:36.553584 sshd[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:36.562604 systemd-logind[1449]: New session 10 of user core. Dec 13 01:32:36.568164 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:32:36.722919 systemd[1]: run-containerd-runc-k8s.io-9ad25d084f58d295e239079f0313509aee7ccc1f6d5ef191c0610108e152149c-runc.Q7QerS.mount: Deactivated successfully. Dec 13 01:32:36.866871 sshd[5245]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:36.873912 systemd[1]: sshd@9-10.128.0.13:22-147.75.109.163:41594.service: Deactivated successfully. Dec 13 01:32:36.877646 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:32:36.881681 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:32:36.885569 systemd-logind[1449]: Removed session 10. Dec 13 01:32:36.921382 systemd[1]: Started sshd@10-10.128.0.13:22-147.75.109.163:41604.service - OpenSSH per-connection server daemon (147.75.109.163:41604). Dec 13 01:32:37.207009 sshd[5279]: Accepted publickey for core from 147.75.109.163 port 41604 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:32:37.208577 sshd[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:37.215350 systemd-logind[1449]: New session 11 of user core. 
Dec 13 01:32:37.220161 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:32:37.548678 sshd[5279]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:37.553855 systemd[1]: sshd@10-10.128.0.13:22-147.75.109.163:41604.service: Deactivated successfully.
Dec 13 01:32:37.558201 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:32:37.560704 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:32:37.562977 systemd-logind[1449]: Removed session 11.
Dec 13 01:32:37.607327 systemd[1]: Started sshd@11-10.128.0.13:22-147.75.109.163:41612.service - OpenSSH per-connection server daemon (147.75.109.163:41612).
Dec 13 01:32:37.901184 sshd[5296]: Accepted publickey for core from 147.75.109.163 port 41612 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:32:37.903252 sshd[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:37.909041 systemd-logind[1449]: New session 12 of user core.
Dec 13 01:32:37.912197 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:32:38.198240 sshd[5296]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:38.203681 systemd[1]: sshd@11-10.128.0.13:22-147.75.109.163:41612.service: Deactivated successfully.
Dec 13 01:32:38.207106 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:32:38.209910 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:32:38.212435 systemd-logind[1449]: Removed session 12.
Dec 13 01:32:40.988806 kubelet[2608]: I1213 01:32:40.988433 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:32:43.250289 systemd[1]: Started sshd@12-10.128.0.13:22-147.75.109.163:41618.service - OpenSSH per-connection server daemon (147.75.109.163:41618).
Dec 13 01:32:43.559121 sshd[5311]: Accepted publickey for core from 147.75.109.163 port 41618 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:32:43.561137 sshd[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:43.568256 systemd-logind[1449]: New session 13 of user core.
Dec 13 01:32:43.575157 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:32:43.861859 sshd[5311]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:43.867563 systemd[1]: sshd@12-10.128.0.13:22-147.75.109.163:41618.service: Deactivated successfully.
Dec 13 01:32:43.870664 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:32:43.871782 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:32:43.873320 systemd-logind[1449]: Removed session 13.
Dec 13 01:32:44.617791 systemd[1]: run-containerd-runc-k8s.io-5ae71bd103587ae127e6d8b3fb30bf76090b66ac2bc15daa0b3ac204317fae98-runc.57jtF0.mount: Deactivated successfully.
Dec 13 01:32:48.918861 systemd[1]: Started sshd@13-10.128.0.13:22-147.75.109.163:49542.service - OpenSSH per-connection server daemon (147.75.109.163:49542).
Dec 13 01:32:49.224594 sshd[5354]: Accepted publickey for core from 147.75.109.163 port 49542 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:32:49.226690 sshd[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:49.233340 systemd-logind[1449]: New session 14 of user core.
Dec 13 01:32:49.237143 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:32:49.521686 sshd[5354]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:49.526862 systemd[1]: sshd@13-10.128.0.13:22-147.75.109.163:49542.service: Deactivated successfully.
Dec 13 01:32:49.529900 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:32:49.532401 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:32:49.534273 systemd-logind[1449]: Removed session 14.
Dec 13 01:32:54.591077 systemd[1]: Started sshd@14-10.128.0.13:22-147.75.109.163:49558.service - OpenSSH per-connection server daemon (147.75.109.163:49558).
Dec 13 01:32:54.883889 sshd[5367]: Accepted publickey for core from 147.75.109.163 port 49558 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:32:54.886407 sshd[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:54.892498 systemd-logind[1449]: New session 15 of user core.
Dec 13 01:32:54.900162 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:32:55.176863 sshd[5367]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:55.188520 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:32:55.188958 systemd[1]: sshd@14-10.128.0.13:22-147.75.109.163:49558.service: Deactivated successfully.
Dec 13 01:32:55.196589 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:32:55.205202 systemd-logind[1449]: Removed session 15.
Dec 13 01:33:00.233417 systemd[1]: Started sshd@15-10.128.0.13:22-147.75.109.163:50388.service - OpenSSH per-connection server daemon (147.75.109.163:50388).
Dec 13 01:33:00.527230 sshd[5386]: Accepted publickey for core from 147.75.109.163 port 50388 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:33:00.529449 sshd[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:00.535885 systemd-logind[1449]: New session 16 of user core.
Dec 13 01:33:00.543230 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:33:00.819837 sshd[5386]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:00.826164 systemd[1]: sshd@15-10.128.0.13:22-147.75.109.163:50388.service: Deactivated successfully.
Dec 13 01:33:00.828889 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:33:00.830050 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:33:00.833555 systemd-logind[1449]: Removed session 16.
Dec 13 01:33:00.874429 systemd[1]: Started sshd@16-10.128.0.13:22-147.75.109.163:50400.service - OpenSSH per-connection server daemon (147.75.109.163:50400).
Dec 13 01:33:01.166259 sshd[5399]: Accepted publickey for core from 147.75.109.163 port 50400 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:33:01.168129 sshd[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:01.176227 systemd-logind[1449]: New session 17 of user core.
Dec 13 01:33:01.179219 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:33:01.529023 sshd[5399]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:01.534950 systemd[1]: sshd@16-10.128.0.13:22-147.75.109.163:50400.service: Deactivated successfully.
Dec 13 01:33:01.538183 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:33:01.539641 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:33:01.541815 systemd-logind[1449]: Removed session 17.
Dec 13 01:33:01.584819 systemd[1]: Started sshd@17-10.128.0.13:22-147.75.109.163:50416.service - OpenSSH per-connection server daemon (147.75.109.163:50416).
Dec 13 01:33:01.876915 sshd[5410]: Accepted publickey for core from 147.75.109.163 port 50416 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:33:01.878826 sshd[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:01.885329 systemd-logind[1449]: New session 18 of user core.
Dec 13 01:33:01.890151 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:33:03.986316 sshd[5410]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:03.993558 systemd[1]: sshd@17-10.128.0.13:22-147.75.109.163:50416.service: Deactivated successfully.
Dec 13 01:33:03.996704 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:33:03.998381 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:33:04.000573 systemd-logind[1449]: Removed session 18.
Dec 13 01:33:04.044372 systemd[1]: Started sshd@18-10.128.0.13:22-147.75.109.163:50422.service - OpenSSH per-connection server daemon (147.75.109.163:50422).
Dec 13 01:33:04.328049 sshd[5428]: Accepted publickey for core from 147.75.109.163 port 50422 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:33:04.330798 sshd[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:04.338234 systemd-logind[1449]: New session 19 of user core.
Dec 13 01:33:04.344269 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:33:04.750993 sshd[5428]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:04.756520 systemd[1]: sshd@18-10.128.0.13:22-147.75.109.163:50422.service: Deactivated successfully.
Dec 13 01:33:04.759488 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:33:04.760709 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:33:04.762342 systemd-logind[1449]: Removed session 19.
Dec 13 01:33:04.809357 systemd[1]: Started sshd@19-10.128.0.13:22-147.75.109.163:50432.service - OpenSSH per-connection server daemon (147.75.109.163:50432).
Dec 13 01:33:05.105719 sshd[5441]: Accepted publickey for core from 147.75.109.163 port 50432 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:33:05.107448 sshd[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:05.113539 systemd-logind[1449]: New session 20 of user core.
Dec 13 01:33:05.119211 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:33:05.401777 sshd[5441]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:05.407735 systemd[1]: sshd@19-10.128.0.13:22-147.75.109.163:50432.service: Deactivated successfully.
Dec 13 01:33:05.410636 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:33:05.411776 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:33:05.413462 systemd-logind[1449]: Removed session 20.
Dec 13 01:33:10.457368 systemd[1]: Started sshd@20-10.128.0.13:22-147.75.109.163:37340.service - OpenSSH per-connection server daemon (147.75.109.163:37340).
Dec 13 01:33:10.740063 sshd[5472]: Accepted publickey for core from 147.75.109.163 port 37340 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:33:10.742027 sshd[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:10.749042 systemd-logind[1449]: New session 21 of user core.
Dec 13 01:33:10.752287 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:33:11.029977 sshd[5472]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:11.037651 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:33:11.041497 systemd[1]: sshd@20-10.128.0.13:22-147.75.109.163:37340.service: Deactivated successfully.
Dec 13 01:33:11.046791 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:33:11.048584 systemd-logind[1449]: Removed session 21.
Dec 13 01:33:15.485665 systemd[1]: run-containerd-runc-k8s.io-9ad25d084f58d295e239079f0313509aee7ccc1f6d5ef191c0610108e152149c-runc.fe4ucr.mount: Deactivated successfully.
Dec 13 01:33:16.086460 systemd[1]: Started sshd@21-10.128.0.13:22-147.75.109.163:51052.service - OpenSSH per-connection server daemon (147.75.109.163:51052).
Dec 13 01:33:16.389066 sshd[5529]: Accepted publickey for core from 147.75.109.163 port 51052 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:33:16.391594 sshd[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:16.402269 systemd-logind[1449]: New session 22 of user core.
Dec 13 01:33:16.408582 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:33:16.720752 sshd[5529]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:16.729361 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:33:16.730076 systemd[1]: sshd@21-10.128.0.13:22-147.75.109.163:51052.service: Deactivated successfully.
Dec 13 01:33:16.734971 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:33:16.739148 systemd-logind[1449]: Removed session 22.
Dec 13 01:33:21.778352 systemd[1]: Started sshd@22-10.128.0.13:22-147.75.109.163:51068.service - OpenSSH per-connection server daemon (147.75.109.163:51068).
Dec 13 01:33:22.066829 sshd[5546]: Accepted publickey for core from 147.75.109.163 port 51068 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:33:22.068608 sshd[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:22.075203 systemd-logind[1449]: New session 23 of user core.
Dec 13 01:33:22.081192 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:33:22.404479 sshd[5546]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:22.409093 systemd[1]: sshd@22-10.128.0.13:22-147.75.109.163:51068.service: Deactivated successfully.
Dec 13 01:33:22.411826 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:33:22.414473 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:33:22.416157 systemd-logind[1449]: Removed session 23.