Dec 13 01:26:36.098472 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:26:36.098522 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:26:36.098541 kernel: BIOS-provided physical RAM map:
Dec 13 01:26:36.098555 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Dec 13 01:26:36.098568 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Dec 13 01:26:36.098581 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Dec 13 01:26:36.098598 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Dec 13 01:26:36.098616 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Dec 13 01:26:36.098630 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Dec 13 01:26:36.098645 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Dec 13 01:26:36.098660 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Dec 13 01:26:36.098675 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Dec 13 01:26:36.098689 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Dec 13 01:26:36.098704 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Dec 13 01:26:36.098725 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Dec 13 01:26:36.098741 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Dec 13 01:26:36.098757 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Dec 13 01:26:36.098773 kernel: NX (Execute Disable) protection: active
Dec 13 01:26:36.098789 kernel: APIC: Static calls initialized
Dec 13 01:26:36.098805 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:26:36.098821 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Dec 13 01:26:36.098837 kernel: SMBIOS 2.4 present.
Dec 13 01:26:36.098854 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Dec 13 01:26:36.098880 kernel: Hypervisor detected: KVM
Dec 13 01:26:36.098900 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:26:36.098917 kernel: kvm-clock: using sched offset of 11830288101 cycles
Dec 13 01:26:36.098934 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:26:36.098951 kernel: tsc: Detected 2299.998 MHz processor
Dec 13 01:26:36.098968 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:26:36.098985 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:26:36.099001 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Dec 13 01:26:36.099017 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Dec 13 01:26:36.099078 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:26:36.099100 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 13 01:26:36.099116 kernel: Using GB pages for direct mapping
Dec 13 01:26:36.099132 kernel: Secure boot disabled
Dec 13 01:26:36.099149 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:26:36.099166 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Dec 13 01:26:36.099183 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Dec 13 01:26:36.099201 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Dec 13 01:26:36.099224 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Dec 13 01:26:36.099245 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Dec 13 01:26:36.099263 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Dec 13 01:26:36.099280 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Dec 13 01:26:36.099297 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Dec 13 01:26:36.099315 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Dec 13 01:26:36.099332 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Dec 13 01:26:36.099353 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Dec 13 01:26:36.099371 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Dec 13 01:26:36.099388 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Dec 13 01:26:36.099405 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Dec 13 01:26:36.099423 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Dec 13 01:26:36.099440 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Dec 13 01:26:36.099456 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Dec 13 01:26:36.099474 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Dec 13 01:26:36.099491 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Dec 13 01:26:36.099512 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Dec 13 01:26:36.099529 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:26:36.099547 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 01:26:36.099564 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 01:26:36.099581 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Dec 13 01:26:36.099598 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Dec 13 01:26:36.099616 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Dec 13 01:26:36.099634 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Dec 13 01:26:36.099651 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Dec 13 01:26:36.099673 kernel: Zone ranges:
Dec 13 01:26:36.099690 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:26:36.099707 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 01:26:36.099723 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 01:26:36.099740 kernel: Movable zone start for each node
Dec 13 01:26:36.099757 kernel: Early memory node ranges
Dec 13 01:26:36.099775 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Dec 13 01:26:36.099792 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Dec 13 01:26:36.099809 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Dec 13 01:26:36.099830 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Dec 13 01:26:36.099848 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 01:26:36.099873 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Dec 13 01:26:36.099890 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:26:36.099908 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Dec 13 01:26:36.099924 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Dec 13 01:26:36.099939 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 13 01:26:36.099955 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Dec 13 01:26:36.099971 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 01:26:36.099992 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:26:36.100009 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:26:36.100026 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:26:36.100067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:26:36.100085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:26:36.100101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:26:36.100118 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:26:36.100134 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:26:36.100152 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 01:26:36.100175 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:26:36.100193 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:26:36.100211 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:26:36.100229 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:26:36.100247 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:26:36.100266 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:26:36.100283 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:26:36.100301 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:26:36.100321 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:26:36.100344 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:26:36.100362 kernel: random: crng init done
Dec 13 01:26:36.100379 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 01:26:36.100397 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:26:36.100415 kernel: Fallback order for Node 0: 0
Dec 13 01:26:36.100433 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Dec 13 01:26:36.100451 kernel: Policy zone: Normal
Dec 13 01:26:36.100468 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:26:36.100490 kernel: software IO TLB: area num 2.
Dec 13 01:26:36.100508 kernel: Memory: 7513376K/7860584K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 346948K reserved, 0K cma-reserved)
Dec 13 01:26:36.100526 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:26:36.100544 kernel: Kernel/User page tables isolation: enabled
Dec 13 01:26:36.100562 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:26:36.100580 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:26:36.100598 kernel: Dynamic Preempt: voluntary
Dec 13 01:26:36.100615 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:26:36.100640 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:26:36.100675 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:26:36.100695 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:26:36.100715 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:26:36.100738 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:26:36.100757 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:26:36.100776 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:26:36.100796 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 01:26:36.100815 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:26:36.100835 kernel: Console: colour dummy device 80x25
Dec 13 01:26:36.100857 kernel: printk: console [ttyS0] enabled
Dec 13 01:26:36.100885 kernel: ACPI: Core revision 20230628
Dec 13 01:26:36.100904 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:26:36.100922 kernel: x2apic enabled
Dec 13 01:26:36.100942 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:26:36.100961 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Dec 13 01:26:36.100981 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 01:26:36.101000 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Dec 13 01:26:36.101024 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Dec 13 01:26:36.101058 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Dec 13 01:26:36.101078 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:26:36.101097 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 01:26:36.101116 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 01:26:36.101135 kernel: Spectre V2 : Mitigation: IBRS
Dec 13 01:26:36.101155 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:26:36.101174 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:26:36.101192 kernel: RETBleed: Mitigation: IBRS
Dec 13 01:26:36.101216 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:26:36.101235 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Dec 13 01:26:36.101254 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:26:36.101273 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 01:26:36.101293 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:26:36.101312 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:26:36.101331 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:26:36.101349 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:26:36.101367 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:26:36.101391 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 01:26:36.101411 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:26:36.101429 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:26:36.101447 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:26:36.101466 kernel: landlock: Up and running.
Dec 13 01:26:36.101484 kernel: SELinux: Initializing.
Dec 13 01:26:36.101503 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:26:36.101521 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:26:36.101540 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Dec 13 01:26:36.101562 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:26:36.101580 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:26:36.101599 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:26:36.101619 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Dec 13 01:26:36.101638 kernel: signal: max sigframe size: 1776
Dec 13 01:26:36.101657 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:26:36.101677 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:26:36.101696 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:26:36.101719 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:26:36.101736 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:26:36.101755 kernel: .... node #0, CPUs: #1
Dec 13 01:26:36.101775 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 01:26:36.101796 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:26:36.101815 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:26:36.101835 kernel: smpboot: Max logical packages: 1
Dec 13 01:26:36.101855 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 13 01:26:36.101882 kernel: devtmpfs: initialized
Dec 13 01:26:36.101905 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:26:36.101925 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Dec 13 01:26:36.101944 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:26:36.101964 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:26:36.101983 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:26:36.102002 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:26:36.102021 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:26:36.102062 kernel: audit: type=2000 audit(1734053195.059:1): state=initialized audit_enabled=0 res=1
Dec 13 01:26:36.102079 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:26:36.102101 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:26:36.102117 kernel: cpuidle: using governor menu
Dec 13 01:26:36.102134 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:26:36.102153 kernel: dca service started, version 1.12.1
Dec 13 01:26:36.102171 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:26:36.102190 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:26:36.102209 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:26:36.102226 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:26:36.102245 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:26:36.102266 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:26:36.102283 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:26:36.102301 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:26:36.102318 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:26:36.102335 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:26:36.102354 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 01:26:36.102370 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:26:36.102389 kernel: ACPI: Interpreter enabled
Dec 13 01:26:36.102407 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:26:36.102430 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:26:36.102448 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:26:36.102467 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 13 01:26:36.102485 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 01:26:36.102504 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:26:36.102753 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:26:36.102965 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 01:26:36.103176 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 01:26:36.103201 kernel: PCI host bridge to bus 0000:00
Dec 13 01:26:36.103383 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:26:36.103555 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:26:36.103719 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:26:36.103891 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Dec 13 01:26:36.104086 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:26:36.104298 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 01:26:36.104495 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Dec 13 01:26:36.104697 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 01:26:36.104893 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 01:26:36.105107 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Dec 13 01:26:36.105300 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Dec 13 01:26:36.105527 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Dec 13 01:26:36.105721 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:26:36.105933 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Dec 13 01:26:36.106132 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Dec 13 01:26:36.106319 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:26:36.106500 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Dec 13 01:26:36.106679 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Dec 13 01:26:36.106708 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:26:36.106737 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:26:36.106757 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:26:36.106776 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:26:36.106796 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 01:26:36.106816 kernel: iommu: Default domain type: Translated
Dec 13 01:26:36.106836 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:26:36.106855 kernel: efivars: Registered efivars operations
Dec 13 01:26:36.106901 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:26:36.106927 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:26:36.106946 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Dec 13 01:26:36.106965 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Dec 13 01:26:36.106984 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Dec 13 01:26:36.107002 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Dec 13 01:26:36.107022 kernel: vgaarb: loaded
Dec 13 01:26:36.107095 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:26:36.107114 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:26:36.107133 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:26:36.107155 kernel: pnp: PnP ACPI init
Dec 13 01:26:36.107172 kernel: pnp: PnP ACPI: found 7 devices
Dec 13 01:26:36.107192 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:26:36.107209 kernel: NET: Registered PF_INET protocol family
Dec 13 01:26:36.107228 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:26:36.107248 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 01:26:36.107266 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:26:36.107286 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:26:36.107305 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 13 01:26:36.107330 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 01:26:36.107350 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:26:36.107369 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:26:36.107389 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:26:36.107409 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:26:36.107599 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:26:36.107768 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:26:36.107945 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:26:36.108143 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Dec 13 01:26:36.108335 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 01:26:36.108362 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:26:36.108383 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 01:26:36.108403 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Dec 13 01:26:36.108422 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:26:36.108441 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 01:26:36.108460 kernel: clocksource: Switched to clocksource tsc
Dec 13 01:26:36.108485 kernel: Initialise system trusted keyrings
Dec 13 01:26:36.108504 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 01:26:36.108521 kernel: Key type asymmetric registered
Dec 13 01:26:36.108538 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:26:36.108556 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:26:36.108574 kernel: io scheduler mq-deadline registered
Dec 13 01:26:36.108590 kernel: io scheduler kyber registered
Dec 13 01:26:36.108613 kernel: io scheduler bfq registered
Dec 13 01:26:36.108634 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:26:36.108663 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 01:26:36.108873 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Dec 13 01:26:36.108898 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Dec 13 01:26:36.109143 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Dec 13 01:26:36.109168 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 01:26:36.109349 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Dec 13 01:26:36.109372 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:26:36.109390 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:26:36.109409 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 01:26:36.109434 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Dec 13 01:26:36.109453 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Dec 13 01:26:36.109635 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Dec 13 01:26:36.109660 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:26:36.109678 kernel: i8042: Warning: Keylock active
Dec 13 01:26:36.109696 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:26:36.109715 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:26:36.109898 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 01:26:36.110098 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 01:26:36.110267 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:26:35 UTC (1734053195)
Dec 13 01:26:36.110432 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 01:26:36.110455 kernel: intel_pstate: CPU model not supported
Dec 13 01:26:36.110473 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:26:36.110492 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 01:26:36.110511 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:26:36.110535 kernel: Segment Routing with IPv6
Dec 13 01:26:36.110553 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:26:36.110572 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:26:36.110590 kernel: Key type dns_resolver registered
Dec 13 01:26:36.110609 kernel: IPI shorthand broadcast: enabled
Dec 13 01:26:36.110627 kernel: sched_clock: Marking stable (853003886, 148926582)->(1028101423, -26170955)
Dec 13 01:26:36.110646 kernel: registered taskstats version 1
Dec 13 01:26:36.110665 kernel: Loading compiled-in X.509 certificates
Dec 13 01:26:36.110683 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:26:36.110702 kernel: Key type .fscrypt registered
Dec 13 01:26:36.110723 kernel: Key type fscrypt-provisioning registered
Dec 13 01:26:36.110742 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:26:36.110761 kernel: ima: No architecture policies found
Dec 13 01:26:36.110779 kernel: clk: Disabling unused clocks
Dec 13 01:26:36.110797 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:26:36.110816 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:26:36.110834 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:26:36.110853 kernel: Run /init as init process
Dec 13 01:26:36.110884 kernel: with arguments:
Dec 13 01:26:36.110902 kernel: /init
Dec 13 01:26:36.110920 kernel: with environment:
Dec 13 01:26:36.110937 kernel: HOME=/
Dec 13 01:26:36.110955 kernel: TERM=linux
Dec 13 01:26:36.110973 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:26:36.110992 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:26:36.111014 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:26:36.111060 systemd[1]: Detected virtualization google.
Dec 13 01:26:36.111080 systemd[1]: Detected architecture x86-64.
Dec 13 01:26:36.111098 systemd[1]: Running in initrd.
Dec 13 01:26:36.111117 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:26:36.111136 systemd[1]: Hostname set to <localhost>.
Dec 13 01:26:36.111157 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:26:36.111176 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:26:36.111195 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:26:36.111219 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:26:36.111239 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:26:36.111259 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:26:36.111278 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:26:36.111298 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:26:36.111320 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:26:36.111343 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:26:36.111363 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:26:36.111383 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:26:36.111422 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:26:36.111446 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:26:36.111466 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:26:36.111486 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:26:36.111510 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:26:36.111530 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:26:36.111551 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:26:36.111571 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:26:36.111591 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:26:36.111611 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:26:36.111631 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:26:36.111651 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:26:36.111675 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:26:36.111695 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:26:36.111715 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:26:36.111735 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:26:36.111754 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:26:36.111775 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:26:36.111795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:26:36.111845 systemd-journald[183]: Collecting audit messages is disabled.
Dec 13 01:26:36.111899 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:26:36.111919 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:26:36.111939 systemd-journald[183]: Journal started
Dec 13 01:26:36.111983 systemd-journald[183]: Runtime Journal (/run/log/journal/5b2310dd10114d79aa23a7ced83e8694) is 8.0M, max 148.7M, 140.7M free.
Dec 13 01:26:36.119054 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:26:36.118588 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 01:26:36.120575 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:26:36.138254 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:26:36.142469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:26:36.152826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:36.168052 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:26:36.170321 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:26:36.175973 kernel: Bridge firewalling registered
Dec 13 01:26:36.172088 systemd-modules-load[184]: Inserted module 'br_netfilter'
Dec 13 01:26:36.174673 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:26:36.191242 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:26:36.191655 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:26:36.191984 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:26:36.201112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:26:36.214237 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:26:36.222966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:26:36.229091 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:26:36.239847 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:26:36.254330 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:26:36.277543 dracut-cmdline[218]: dracut-dracut-053
Dec 13 01:26:36.281134 systemd-resolved[214]: Positive Trust Anchors:
Dec 13 01:26:36.281147 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:26:36.290164 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:26:36.281207 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:26:36.286767 systemd-resolved[214]: Defaulting to hostname 'linux'.
Dec 13 01:26:36.288491 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:26:36.301365 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:26:36.382071 kernel: SCSI subsystem initialized
Dec 13 01:26:36.393072 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:26:36.405069 kernel: iscsi: registered transport (tcp)
Dec 13 01:26:36.428088 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:26:36.428171 kernel: QLogic iSCSI HBA Driver
Dec 13 01:26:36.479021 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:26:36.489216 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:26:36.517668 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:26:36.517753 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:26:36.518951 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:26:36.562070 kernel: raid6: avx2x4 gen() 18011 MB/s
Dec 13 01:26:36.579059 kernel: raid6: avx2x2 gen() 18089 MB/s
Dec 13 01:26:36.596612 kernel: raid6: avx2x1 gen() 14163 MB/s
Dec 13 01:26:36.596651 kernel: raid6: using algorithm avx2x2 gen() 18089 MB/s
Dec 13 01:26:36.614504 kernel: raid6: .... xor() 17430 MB/s, rmw enabled
Dec 13 01:26:36.614567 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:26:36.637073 kernel: xor: automatically using best checksumming function avx
Dec 13 01:26:36.811070 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:26:36.824658 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:26:36.831256 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:26:36.861964 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Dec 13 01:26:36.868842 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:26:36.880222 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:26:36.909022 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Dec 13 01:26:36.945839 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:26:36.961306 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:26:37.042240 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:26:37.053310 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:26:37.094013 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:26:37.103589 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:26:37.112150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:26:37.117504 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:26:37.132276 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:26:37.170768 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:26:37.178176 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:26:37.199759 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:26:37.199831 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:26:37.216452 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:26:37.219873 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:26:37.232443 kernel: scsi host0: Virtio SCSI HBA
Dec 13 01:26:37.232897 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:26:37.236518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:26:37.237369 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:37.285321 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:26:37.290182 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Dec 13 01:26:37.305640 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:26:37.335497 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Dec 13 01:26:37.350280 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 13 01:26:37.350596 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 13 01:26:37.350828 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 13 01:26:37.351076 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 01:26:37.351302 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:26:37.351331 kernel: GPT:17805311 != 25165823
Dec 13 01:26:37.351355 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:26:37.351379 kernel: GPT:17805311 != 25165823
Dec 13 01:26:37.351401 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:26:37.351425 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:26:37.351463 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 13 01:26:37.340250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:37.356676 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:26:37.415590 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456)
Dec 13 01:26:37.415665 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (463)
Dec 13 01:26:37.415563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:26:37.439955 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Dec 13 01:26:37.447332 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Dec 13 01:26:37.454489 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Dec 13 01:26:37.460643 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Dec 13 01:26:37.460900 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Dec 13 01:26:37.476237 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:26:37.499702 disk-uuid[550]: Primary Header is updated.
Dec 13 01:26:37.499702 disk-uuid[550]: Secondary Entries is updated.
Dec 13 01:26:37.499702 disk-uuid[550]: Secondary Header is updated.
Dec 13 01:26:37.514179 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:26:37.535064 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:26:37.557056 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:26:38.548076 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:26:38.548662 disk-uuid[551]: The operation has completed successfully.
Dec 13 01:26:38.619813 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:26:38.619961 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:26:38.650253 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:26:38.680470 sh[568]: Success
Dec 13 01:26:38.704154 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:26:38.792647 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:26:38.799970 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:26:38.818684 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:26:38.866380 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:26:38.866467 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:26:38.866493 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:26:38.875809 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:26:38.888361 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:26:38.922066 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:26:38.927355 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:26:38.928301 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:26:38.934248 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:26:38.997503 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:26:38.997546 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:26:38.997568 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:26:38.955581 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:26:39.022211 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:26:39.022256 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:26:39.042138 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:26:39.059691 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:26:39.088304 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:26:39.125761 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:26:39.132311 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:26:39.217843 systemd-networkd[750]: lo: Link UP
Dec 13 01:26:39.218258 systemd-networkd[750]: lo: Gained carrier
Dec 13 01:26:39.220653 systemd-networkd[750]: Enumeration completed
Dec 13 01:26:39.221228 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:26:39.221234 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:26:39.222966 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:26:39.224604 systemd-networkd[750]: eth0: Link UP
Dec 13 01:26:39.295107 ignition[707]: Ignition 2.19.0
Dec 13 01:26:39.224611 systemd-networkd[750]: eth0: Gained carrier
Dec 13 01:26:39.295119 ignition[707]: Stage: fetch-offline
Dec 13 01:26:39.224625 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:26:39.295175 ignition[707]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:39.250137 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.34/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 01:26:39.295186 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:26:39.292379 systemd[1]: Reached target network.target - Network.
Dec 13 01:26:39.295351 ignition[707]: parsed url from cmdline: ""
Dec 13 01:26:39.301815 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:26:39.295357 ignition[707]: no config URL provided
Dec 13 01:26:39.332413 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:26:39.295364 ignition[707]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:26:39.421880 unknown[761]: fetched base config from "system"
Dec 13 01:26:39.295375 ignition[707]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:26:39.421919 unknown[761]: fetched base config from "system"
Dec 13 01:26:39.295384 ignition[707]: failed to fetch config: resource requires networking
Dec 13 01:26:39.421931 unknown[761]: fetched user config from "gcp"
Dec 13 01:26:39.296217 ignition[707]: Ignition finished successfully
Dec 13 01:26:39.425964 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:26:39.400698 ignition[761]: Ignition 2.19.0
Dec 13 01:26:39.449494 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:26:39.400711 ignition[761]: Stage: fetch
Dec 13 01:26:39.529472 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:26:39.400951 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:39.568399 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:26:39.400965 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:26:39.635481 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:26:39.401125 ignition[761]: parsed url from cmdline: ""
Dec 13 01:26:39.663593 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:26:39.401131 ignition[761]: no config URL provided
Dec 13 01:26:39.703464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:26:39.401139 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:26:39.743463 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:26:39.401151 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:26:39.761797 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:26:39.401175 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 13 01:26:39.787556 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:26:39.408099 ignition[761]: GET result: OK
Dec 13 01:26:39.823425 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:26:39.408228 ignition[761]: parsing config with SHA512: 454b14bf67d9fdedf446c2071c43c3378ea9f65cb4d5538cda7676a2def7993870a4c0ef0d0002bf82ad1bc038efcc772acd9e48ff71a0a0ba7888b6135c5fc3
Dec 13 01:26:39.423834 ignition[761]: fetch: fetch complete
Dec 13 01:26:39.423847 ignition[761]: fetch: fetch passed
Dec 13 01:26:39.423976 ignition[761]: Ignition finished successfully
Dec 13 01:26:39.524439 ignition[768]: Ignition 2.19.0
Dec 13 01:26:39.524460 ignition[768]: Stage: kargs
Dec 13 01:26:39.524758 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:39.524774 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:26:39.525872 ignition[768]: kargs: kargs passed
Dec 13 01:26:39.525945 ignition[768]: Ignition finished successfully
Dec 13 01:26:39.629710 ignition[775]: Ignition 2.19.0
Dec 13 01:26:39.629721 ignition[775]: Stage: disks
Dec 13 01:26:39.629987 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:39.630000 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:26:39.634110 ignition[775]: disks: disks passed
Dec 13 01:26:39.634232 ignition[775]: Ignition finished successfully
Dec 13 01:26:39.933665 systemd-fsck[783]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:26:40.114696 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:26:40.138209 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:26:40.353100 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:26:40.353964 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:26:40.355050 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:26:40.431351 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:26:40.460223 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:26:40.472767 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:26:40.472839 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:26:40.472883 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:26:40.638307 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (791)
Dec 13 01:26:40.638368 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:26:40.638393 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:26:40.638520 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:26:40.638595 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:26:40.638621 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:26:40.485237 systemd-networkd[750]: eth0: Gained IPv6LL
Dec 13 01:26:40.630559 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:26:40.648585 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:26:40.673528 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:26:40.903693 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:26:40.915214 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:26:40.929675 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:26:40.946259 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:26:41.239992 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:26:41.268234 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:26:41.298545 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:26:41.326369 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:26:41.338681 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:26:41.392901 ignition[905]: INFO : Ignition 2.19.0
Dec 13 01:26:41.392901 ignition[905]: INFO : Stage: mount
Dec 13 01:26:41.392901 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:41.392901 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:26:41.461272 ignition[905]: INFO : mount: mount passed
Dec 13 01:26:41.461272 ignition[905]: INFO : Ignition finished successfully
Dec 13 01:26:41.395955 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:26:41.438414 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:26:41.469317 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:26:41.527336 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:26:41.614122 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (917)
Dec 13 01:26:41.640059 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:26:41.640186 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:26:41.640213 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:26:41.675940 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:26:41.676080 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:26:41.680272 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:26:41.739178 ignition[934]: INFO : Ignition 2.19.0
Dec 13 01:26:41.739178 ignition[934]: INFO : Stage: files
Dec 13 01:26:41.758349 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:41.758349 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:26:41.758349 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:26:41.758349 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:26:41.758349 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:26:41.851778 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:26:41.851778 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:26:41.851778 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:26:41.851778 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:26:41.851778 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:26:41.763458 unknown[934]: wrote ssh authorized keys file for user: core
Dec 13 01:26:41.995256 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:26:42.074595 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:26:42.074595 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 01:26:42.415369 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:26:42.917979 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:26:42.917979 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:26:42.936491 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:26:42.936491 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:26:42.936491 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:26:42.936491 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:26:42.936491 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:26:42.936491 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:26:42.936491 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:26:42.936491 ignition[934]: INFO : files: files passed
Dec 13 01:26:42.936491 ignition[934]: INFO : Ignition finished successfully
Dec 13 01:26:42.922941 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:26:42.962407 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:26:43.027303 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:26:43.033892 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:26:43.227254 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:26:43.227254 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:26:43.034020 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:26:43.276510 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:26:43.108918 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:26:43.145024 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:26:43.182346 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:26:43.271833 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:26:43.271972 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:26:43.287495 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:26:43.319493 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:26:43.350471 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:26:43.357547 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:26:43.480661 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:26:43.512337 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:26:43.564652 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:26:43.591495 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:26:43.592121 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:26:43.627762 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:26:43.628126 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:26:43.682323 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:26:43.682760 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:26:43.700689 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:26:43.719808 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:26:43.771436 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:26:43.772110 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:26:43.790664 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:26:43.825661 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:26:43.837837 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:26:43.858891 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:26:43.878723 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:26:43.878980 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:26:43.917849 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:26:43.932905 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:26:43.974686 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:26:43.974968 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:26:44.008437 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:26:44.008686 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:26:44.059726 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:26:44.060018 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:26:44.085748 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:26:44.086070 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:26:44.105434 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:26:44.140595 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:26:44.207331 ignition[986]: INFO : Ignition 2.19.0
Dec 13 01:26:44.207331 ignition[986]: INFO : Stage: umount
Dec 13 01:26:44.207331 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:44.207331 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:26:44.207331 ignition[986]: INFO : umount: umount passed
Dec 13 01:26:44.207331 ignition[986]: INFO : Ignition finished successfully
Dec 13 01:26:44.141191 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:26:44.176976 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:26:44.215496 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:26:44.215759 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:26:44.230862 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:26:44.231150 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:26:44.286894 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:26:44.287931 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:26:44.288076 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:26:44.290056 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:26:44.290195 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:26:44.330444 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:26:44.330603 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:26:44.372222 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:26:44.372392 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:26:44.391419 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:26:44.391503 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:26:44.412345 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:26:44.412455 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:26:44.437382 systemd[1]: Stopped target network.target - Network.
Dec 13 01:26:44.455260 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:26:44.455492 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:26:44.477352 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:26:44.492247 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:26:44.494181 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:26:44.511223 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:26:44.526244 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:26:44.541415 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:26:44.541492 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:26:44.549498 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:26:44.549568 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:26:44.584401 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:26:44.584501 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:26:44.609423 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:26:44.609507 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:26:44.617430 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:26:44.617506 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:26:44.634703 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:26:44.640104 systemd-networkd[750]: eth0: DHCPv6 lease lost
Dec 13 01:26:44.651526 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:26:44.684921 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:26:44.685102 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:26:44.704948 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:26:44.705434 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:26:44.732013 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:26:44.732105 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:26:44.777295 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:26:44.800255 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:26:44.800502 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:26:44.829375 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:26:44.829488 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:26:44.848472 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:26:44.848593 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:26:44.859372 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:26:44.859596 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:26:44.891570 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:26:44.913853 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:26:45.453334 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:26:44.914221 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:26:44.954656 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:26:44.954786 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:26:45.006568 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:26:45.006713 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:26:45.038486 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:26:45.038615 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:26:45.089639 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:26:45.089728 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:26:45.120453 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:26:45.120698 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:26:45.177419 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:26:45.207221 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:26:45.207465 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:26:45.219695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:26:45.219776 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:45.255104 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:26:45.255248 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:26:45.274889 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:26:45.275020 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:26:45.287151 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:26:45.326403 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:26:45.401582 systemd[1]: Switching root.
Dec 13 01:26:45.785296 systemd-journald[183]: Journal stopped
active Dec 13 01:26:36.098789 kernel: APIC: Static calls initialized Dec 13 01:26:36.098805 kernel: efi: EFI v2.7 by EDK II Dec 13 01:26:36.098821 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Dec 13 01:26:36.098837 kernel: SMBIOS 2.4 present. Dec 13 01:26:36.098854 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 01:26:36.098880 kernel: Hypervisor detected: KVM Dec 13 01:26:36.098900 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:26:36.098917 kernel: kvm-clock: using sched offset of 11830288101 cycles Dec 13 01:26:36.098934 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:26:36.098951 kernel: tsc: Detected 2299.998 MHz processor Dec 13 01:26:36.098968 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:26:36.098985 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:26:36.099001 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 01:26:36.099017 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Dec 13 01:26:36.099078 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:26:36.099100 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 01:26:36.099116 kernel: Using GB pages for direct mapping Dec 13 01:26:36.099132 kernel: Secure boot disabled Dec 13 01:26:36.099149 kernel: ACPI: Early table checksum verification disabled Dec 13 01:26:36.099166 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 01:26:36.099183 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 01:26:36.099201 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 01:26:36.099224 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 13 01:26:36.099245 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 01:26:36.099263 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 01:26:36.099280 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 01:26:36.099297 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 01:26:36.099315 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 01:26:36.099332 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 01:26:36.099353 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 01:26:36.099371 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 01:26:36.099388 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 01:26:36.099405 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 01:26:36.099423 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 01:26:36.099440 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 01:26:36.099456 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 01:26:36.099474 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 01:26:36.099491 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 01:26:36.099512 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] 
Dec 13 01:26:36.099529 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 01:26:36.099547 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 01:26:36.099564 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 01:26:36.099581 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 13 01:26:36.099598 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 01:26:36.099616 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 01:26:36.099634 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 01:26:36.099651 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Dec 13 01:26:36.099673 kernel: Zone ranges: Dec 13 01:26:36.099690 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:26:36.099707 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 01:26:36.099723 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:26:36.099740 kernel: Movable zone start for each node Dec 13 01:26:36.099757 kernel: Early memory node ranges Dec 13 01:26:36.099775 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 01:26:36.099792 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 01:26:36.099809 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Dec 13 01:26:36.099830 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 01:26:36.099848 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:26:36.099873 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 01:26:36.099890 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:26:36.099908 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 01:26:36.099924 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 01:26:36.099939 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 01:26:36.099955 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 01:26:36.099971 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 01:26:36.099992 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:26:36.100009 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:26:36.100026 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:26:36.100067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:26:36.100085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:26:36.100101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:26:36.100118 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:26:36.100134 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:26:36.100152 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 01:26:36.100175 kernel: Booting paravirtualized kernel on KVM Dec 13 01:26:36.100193 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:26:36.100211 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:26:36.100229 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:26:36.100247 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:26:36.100266 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:26:36.100283 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:26:36.100301 
kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:26:36.100321 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:26:36.100344 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:26:36.100362 kernel: random: crng init done Dec 13 01:26:36.100379 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 01:26:36.100397 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:26:36.100415 kernel: Fallback order for Node 0: 0 Dec 13 01:26:36.100433 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Dec 13 01:26:36.100451 kernel: Policy zone: Normal Dec 13 01:26:36.100468 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:26:36.100490 kernel: software IO TLB: area num 2. Dec 13 01:26:36.100508 kernel: Memory: 7513376K/7860584K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 346948K reserved, 0K cma-reserved) Dec 13 01:26:36.100526 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:26:36.100544 kernel: Kernel/User page tables isolation: enabled Dec 13 01:26:36.100562 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:26:36.100580 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:26:36.100598 kernel: Dynamic Preempt: voluntary Dec 13 01:26:36.100615 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:26:36.100640 kernel: rcu: RCU event tracing is enabled. Dec 13 01:26:36.100675 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:26:36.100695 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:26:36.100715 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:26:36.100738 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:26:36.100757 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:26:36.100776 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:26:36.100796 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:26:36.100815 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:26:36.100835 kernel: Console: colour dummy device 80x25 Dec 13 01:26:36.100857 kernel: printk: console [ttyS0] enabled Dec 13 01:26:36.100885 kernel: ACPI: Core revision 20230628 Dec 13 01:26:36.100904 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:26:36.100922 kernel: x2apic enabled Dec 13 01:26:36.100942 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:26:36.100961 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 01:26:36.100981 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:26:36.101000 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Dec 13 01:26:36.101024 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 01:26:36.101058 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 01:26:36.101078 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:26:36.101097 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 01:26:36.101116 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 01:26:36.101135 kernel: Spectre V2 : Mitigation: IBRS Dec 13 01:26:36.101155 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:26:36.101174 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:26:36.101192 kernel: RETBleed: Mitigation: IBRS Dec 13 01:26:36.101216 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:26:36.101235 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Dec 13 01:26:36.101254 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:26:36.101273 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 01:26:36.101293 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:26:36.101312 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:26:36.101331 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:26:36.101349 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:26:36.101367 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:26:36.101391 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 01:26:36.101411 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:26:36.101429 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:26:36.101447 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:26:36.101466 kernel: landlock: Up and running. Dec 13 01:26:36.101484 kernel: SELinux: Initializing. Dec 13 01:26:36.101503 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:26:36.101521 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:26:36.101540 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 01:26:36.101562 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:26:36.101580 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:26:36.101599 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:26:36.101619 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 01:26:36.101638 kernel: signal: max sigframe size: 1776 Dec 13 01:26:36.101657 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:26:36.101677 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:26:36.101696 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:26:36.101719 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:26:36.101736 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:26:36.101755 kernel: .... node #0, CPUs: #1 Dec 13 01:26:36.101775 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 01:26:36.101796 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 01:26:36.101815 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:26:36.101835 kernel: smpboot: Max logical packages: 1 Dec 13 01:26:36.101855 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 01:26:36.101882 kernel: devtmpfs: initialized Dec 13 01:26:36.101905 kernel: x86/mm: Memory block size: 128MB Dec 13 01:26:36.101925 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 01:26:36.101944 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:26:36.101964 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:26:36.101983 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:26:36.102002 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:26:36.102021 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:26:36.102062 kernel: audit: type=2000 audit(1734053195.059:1): state=initialized audit_enabled=0 res=1 Dec 13 01:26:36.102079 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:26:36.102101 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:26:36.102117 kernel: cpuidle: using governor menu Dec 13 01:26:36.102134 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:26:36.102153 kernel: dca service started, version 1.12.1 Dec 13 01:26:36.102171 kernel: PCI: Using configuration type 1 for base access Dec 13 01:26:36.102190 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:26:36.102209 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:26:36.102226 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:26:36.102245 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:26:36.102266 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:26:36.102283 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:26:36.102301 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:26:36.102318 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:26:36.102335 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:26:36.102354 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 01:26:36.102370 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:26:36.102389 kernel: ACPI: Interpreter enabled Dec 13 01:26:36.102407 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:26:36.102430 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:26:36.102448 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:26:36.102467 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 13 01:26:36.102485 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 01:26:36.102504 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:26:36.102753 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:26:36.102965 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 01:26:36.103176 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 01:26:36.103201 kernel: PCI host bridge to bus 0000:00 Dec 13 01:26:36.103383 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:26:36.103555 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:26:36.103719 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:26:36.103891 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 01:26:36.104086 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:26:36.104298 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 01:26:36.104495 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 01:26:36.104697 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 01:26:36.104893 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 01:26:36.105107 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 01:26:36.105300 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 01:26:36.105527 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 01:26:36.105721 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:26:36.105933 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 01:26:36.106132 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 01:26:36.106319 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:26:36.106500 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 01:26:36.106679 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 01:26:36.106708 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:26:36.106737 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:26:36.106757 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:26:36.106776 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:26:36.106796 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 01:26:36.106816 kernel: iommu: Default domain type: Translated Dec 13 01:26:36.106836 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:26:36.106855 kernel: efivars: Registered efivars operations Dec 13 01:26:36.106901 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:26:36.106927 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:26:36.106946 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 01:26:36.106965 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 01:26:36.106984 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 01:26:36.107002 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 01:26:36.107022 kernel: vgaarb: loaded Dec 13 01:26:36.107095 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:26:36.107114 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:26:36.107133 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:26:36.107155 kernel: pnp: PnP ACPI init Dec 13 01:26:36.107172 kernel: pnp: PnP ACPI: found 7 devices Dec 13 01:26:36.107192 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:26:36.107209 kernel: NET: Registered PF_INET protocol family Dec 13 01:26:36.107228 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:26:36.107248 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 01:26:36.107266 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:26:36.107286 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:26:36.107305 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 01:26:36.107330 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 01:26:36.107350 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:26:36.107369 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:26:36.107389 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:26:36.107409 kernel: NET: Registered PF_XDP protocol family Dec 13 01:26:36.107599 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:26:36.107768 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:26:36.107945 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:26:36.108143 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 01:26:36.108335 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:26:36.108362 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:26:36.108383 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 01:26:36.108403 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 01:26:36.108422 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:26:36.108441 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:26:36.108460 kernel: clocksource: Switched to clocksource tsc Dec 13 01:26:36.108485 kernel: Initialise system trusted keyrings Dec 13 01:26:36.108504 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 01:26:36.108521 kernel: Key type asymmetric registered Dec 13 01:26:36.108538 kernel: Asymmetric key parser 'x509' registered Dec 13 01:26:36.108556 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:26:36.108574 kernel: io scheduler mq-deadline registered Dec 13 01:26:36.108590 kernel: io scheduler kyber registered Dec 13 01:26:36.108613 kernel: io scheduler bfq registered Dec 13 01:26:36.108634 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:26:36.108663 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 01:26:36.108873 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 01:26:36.108898 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 01:26:36.109143 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 01:26:36.109168 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 01:26:36.109349 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 01:26:36.109372 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:26:36.109390 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:26:36.109409 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 01:26:36.109434 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 01:26:36.109453 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 01:26:36.109635 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 01:26:36.109660 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:26:36.109678 kernel: i8042: Warning: Keylock active Dec 13 01:26:36.109696 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:26:36.109715 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:26:36.109898 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 01:26:36.110098 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 01:26:36.110267 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:26:35 UTC (1734053195) Dec 13 01:26:36.110432 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 01:26:36.110455 kernel: intel_pstate: CPU model not supported Dec 13 01:26:36.110473 kernel: pstore: Using crash dump compression: deflate Dec 13 01:26:36.110492 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 01:26:36.110511 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:26:36.110535 kernel: Segment Routing with IPv6 Dec 13 01:26:36.110553 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:26:36.110572 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:26:36.110590 kernel: Key type dns_resolver registered Dec 13 01:26:36.110609 kernel: IPI shorthand broadcast: enabled Dec 13 01:26:36.110627 kernel: sched_clock: Marking stable (853003886, 148926582)->(1028101423, -26170955) Dec 13 01:26:36.110646 kernel: registered taskstats version 1 Dec 13 01:26:36.110665 kernel: Loading compiled-in X.509 certificates Dec 13 01:26:36.110683 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:26:36.110702 kernel: Key type .fscrypt registered Dec 13 01:26:36.110723 kernel: Key type fscrypt-provisioning registered Dec 13 01:26:36.110742 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:26:36.110761 kernel: ima: No architecture policies found Dec 13 
01:26:36.110779 kernel: clk: Disabling unused clocks Dec 13 01:26:36.110797 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:26:36.110816 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:26:36.110834 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:26:36.110853 kernel: Run /init as init process Dec 13 01:26:36.110884 kernel: with arguments: Dec 13 01:26:36.110902 kernel: /init Dec 13 01:26:36.110920 kernel: with environment: Dec 13 01:26:36.110937 kernel: HOME=/ Dec 13 01:26:36.110955 kernel: TERM=linux Dec 13 01:26:36.110973 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:26:36.110992 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:26:36.111014 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:26:36.111060 systemd[1]: Detected virtualization google. Dec 13 01:26:36.111080 systemd[1]: Detected architecture x86-64. Dec 13 01:26:36.111098 systemd[1]: Running in initrd. Dec 13 01:26:36.111117 systemd[1]: No hostname configured, using default hostname. Dec 13 01:26:36.111136 systemd[1]: Hostname set to . Dec 13 01:26:36.111157 systemd[1]: Initializing machine ID from random generator. Dec 13 01:26:36.111176 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:26:36.111195 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:36.111219 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:36.111239 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:26:36.111259 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:26:36.111278 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:26:36.111298 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:26:36.111320 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:26:36.111343 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:26:36.111363 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:36.111383 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:26:36.111422 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:26:36.111446 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:26:36.111466 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:26:36.111486 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:26:36.111510 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:26:36.111530 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:26:36.111551 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:26:36.111571 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:26:36.111591 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:36.111611 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:36.111631 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:36.111651 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:26:36.111675 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:26:36.111695 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:26:36.111715 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:26:36.111735 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:26:36.111754 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:26:36.111775 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:26:36.111795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:36.111845 systemd-journald[183]: Collecting audit messages is disabled. Dec 13 01:26:36.111899 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:26:36.111919 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:36.111939 systemd-journald[183]: Journal started Dec 13 01:26:36.111983 systemd-journald[183]: Runtime Journal (/run/log/journal/5b2310dd10114d79aa23a7ced83e8694) is 8.0M, max 148.7M, 140.7M free. Dec 13 01:26:36.119054 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:26:36.118588 systemd-modules-load[184]: Inserted module 'overlay' Dec 13 01:26:36.120575 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:26:36.138254 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:26:36.142469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:26:36.152826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:36.168052 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:26:36.170321 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:26:36.175973 kernel: Bridge firewalling registered Dec 13 01:26:36.172088 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 01:26:36.174673 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:36.191242 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:26:36.191655 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:36.191984 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:36.201112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:26:36.214237 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:36.222966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:26:36.229091 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:36.239847 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:26:36.254330 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:26:36.277543 dracut-cmdline[218]: dracut-dracut-053 Dec 13 01:26:36.281134 systemd-resolved[214]: Positive Trust Anchors: Dec 13 01:26:36.281147 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:26:36.290164 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:26:36.281207 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:26:36.286767 systemd-resolved[214]: Defaulting to hostname 'linux'. Dec 13 01:26:36.288491 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:26:36.301365 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:26:36.382071 kernel: SCSI subsystem initialized Dec 13 01:26:36.393072 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:26:36.405069 kernel: iscsi: registered transport (tcp) Dec 13 01:26:36.428088 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:26:36.428171 kernel: QLogic iSCSI HBA Driver Dec 13 01:26:36.479021 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:26:36.489216 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:26:36.517668 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:26:36.517753 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:26:36.518951 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:26:36.562070 kernel: raid6: avx2x4 gen() 18011 MB/s Dec 13 01:26:36.579059 kernel: raid6: avx2x2 gen() 18089 MB/s Dec 13 01:26:36.596612 kernel: raid6: avx2x1 gen() 14163 MB/s Dec 13 01:26:36.596651 kernel: raid6: using algorithm avx2x2 gen() 18089 MB/s Dec 13 01:26:36.614504 kernel: raid6: .... xor() 17430 MB/s, rmw enabled Dec 13 01:26:36.614567 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:26:36.637073 kernel: xor: automatically using best checksumming function avx Dec 13 01:26:36.811070 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:26:36.824658 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:26:36.831256 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:36.861964 systemd-udevd[400]: Using default interface naming scheme 'v255'. Dec 13 01:26:36.868842 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 13 01:26:36.880222 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:26:36.909022 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Dec 13 01:26:36.945839 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:26:36.961306 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:26:37.042240 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:37.053310 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:26:37.094013 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:26:37.103589 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:26:37.112150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:37.117504 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:26:37.132276 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:26:37.170768 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:26:37.178176 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:26:37.199759 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:26:37.199831 kernel: AES CTR mode by8 optimization enabled Dec 13 01:26:37.216452 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:26:37.219873 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:37.232443 kernel: scsi host0: Virtio SCSI HBA Dec 13 01:26:37.232897 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:26:37.236518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:37.237369 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:37.285321 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:37.290182 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 01:26:37.305640 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:37.335497 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 01:26:37.350280 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 01:26:37.350596 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 01:26:37.350828 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 01:26:37.351076 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 01:26:37.351302 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:26:37.351331 kernel: GPT:17805311 != 25165823 Dec 13 01:26:37.351355 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:26:37.351379 kernel: GPT:17805311 != 25165823 Dec 13 01:26:37.351401 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:26:37.351425 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:26:37.351463 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 01:26:37.340250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:37.356676 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 01:26:37.415590 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456) Dec 13 01:26:37.415665 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (463) Dec 13 01:26:37.415563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:37.439955 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Dec 13 01:26:37.447332 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Dec 13 01:26:37.454489 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 13 01:26:37.460643 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Dec 13 01:26:37.460900 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Dec 13 01:26:37.476237 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:26:37.499702 disk-uuid[550]: Primary Header is updated. Dec 13 01:26:37.499702 disk-uuid[550]: Secondary Entries is updated. Dec 13 01:26:37.499702 disk-uuid[550]: Secondary Header is updated. Dec 13 01:26:37.514179 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:26:37.535064 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:26:37.557056 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:26:38.548076 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:26:38.548662 disk-uuid[551]: The operation has completed successfully. Dec 13 01:26:38.619813 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:26:38.619961 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:26:38.650253 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:26:38.680470 sh[568]: Success Dec 13 01:26:38.704154 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:26:38.792647 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:26:38.799970 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:26:38.818684 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:26:38.866380 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:26:38.866467 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:26:38.866493 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:26:38.875809 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:26:38.888361 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:26:38.922066 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:26:38.927355 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:26:38.928301 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:26:38.934248 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Dec 13 01:26:38.955581 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:26:38.997503 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:38.997546 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:26:38.997568 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:26:39.022211 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:26:39.022256 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:26:39.042138 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:39.059691 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:26:39.088304 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:26:39.125761 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:26:39.132311 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:26:39.217843 systemd-networkd[750]: lo: Link UP Dec 13 01:26:39.218258 systemd-networkd[750]: lo: Gained carrier Dec 13 01:26:39.220653 systemd-networkd[750]: Enumeration completed Dec 13 01:26:39.221228 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:39.221234 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:26:39.222966 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:26:39.224604 systemd-networkd[750]: eth0: Link UP Dec 13 01:26:39.224611 systemd-networkd[750]: eth0: Gained carrier Dec 13 01:26:39.224625 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:39.250137 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.34/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 01:26:39.292379 systemd[1]: Reached target network.target - Network. Dec 13 01:26:39.295107 ignition[707]: Ignition 2.19.0 Dec 13 01:26:39.295119 ignition[707]: Stage: fetch-offline Dec 13 01:26:39.295175 ignition[707]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:39.295186 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:26:39.295351 ignition[707]: parsed url from cmdline: "" Dec 13 01:26:39.295357 ignition[707]: no config URL provided Dec 13 01:26:39.295364 ignition[707]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:26:39.295375 ignition[707]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:26:39.295384 ignition[707]: failed to fetch config: resource requires networking Dec 13 01:26:39.296217 ignition[707]: Ignition finished successfully Dec 13 01:26:39.301815 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:26:39.332413 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:26:39.400698 ignition[761]: Ignition 2.19.0 Dec 13 01:26:39.400711 ignition[761]: Stage: fetch Dec 13 01:26:39.400951 ignition[761]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:39.400965 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:26:39.401125 ignition[761]: parsed url from cmdline: "" Dec 13 01:26:39.401131 ignition[761]: no config URL provided Dec 13 01:26:39.401139 ignition[761]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:26:39.401151 ignition[761]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:26:39.401175 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 01:26:39.408099 ignition[761]: GET result: OK Dec 13 01:26:39.408228 ignition[761]: parsing config with SHA512: 454b14bf67d9fdedf446c2071c43c3378ea9f65cb4d5538cda7676a2def7993870a4c0ef0d0002bf82ad1bc038efcc772acd9e48ff71a0a0ba7888b6135c5fc3 Dec 13 01:26:39.421880 unknown[761]: fetched base config from "system" Dec 13 01:26:39.421919 unknown[761]: fetched base config from "system" Dec 13 01:26:39.421931 unknown[761]: fetched user config from "gcp" Dec 13 01:26:39.423834 ignition[761]: fetch: fetch complete Dec 13 01:26:39.423847 ignition[761]: fetch: fetch passed Dec 13 01:26:39.423976 ignition[761]: Ignition finished successfully Dec 13 01:26:39.425964 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:26:39.449494 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:26:39.524439 ignition[768]: Ignition 2.19.0 Dec 13 01:26:39.524460 ignition[768]: Stage: kargs Dec 13 01:26:39.524758 ignition[768]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:39.524774 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:26:39.525872 ignition[768]: kargs: kargs passed Dec 13 01:26:39.525945 ignition[768]: Ignition finished successfully Dec 13 01:26:39.529472 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:26:39.568399 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:26:39.629710 ignition[775]: Ignition 2.19.0 Dec 13 01:26:39.629721 ignition[775]: Stage: disks Dec 13 01:26:39.629987 ignition[775]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:39.630000 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:26:39.634110 ignition[775]: disks: disks passed Dec 13 01:26:39.634232 ignition[775]: Ignition finished successfully Dec 13 01:26:39.635481 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:26:39.663593 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:26:39.703464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:26:39.743463 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:26:39.761797 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:26:39.787556 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:26:39.823425 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:26:39.933665 systemd-fsck[783]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 01:26:40.114696 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:26:40.138209 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:26:40.353100 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:26:40.353964 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:26:40.355050 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
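Two details of the fetch stage above are easy to reproduce: the config comes from the GCE metadata server at the URL in the log, and Ignition fingerprints whatever it parses with SHA512. A sketch of both using only the standard library; the Metadata-Flavor: Google header is an assumption taken from the documented GCE metadata API (the server rejects requests without it, so Ignition must send it even though the log line does not show it):

    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/computeMetadata/v1/"
           "instance/attributes/user-data")

    # Same request the fetch stage logs as "GET ... attempt #1".
    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        user_data = resp.read()

    # Corresponds to the "parsing config with SHA512: ..." fingerprint above.
    print(hashlib.sha512(user_data).hexdigest())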
Dec 13 01:26:40.431351 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:26:40.460223 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:26:40.472767 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:26:40.472839 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:26:40.472883 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:26:40.638307 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (791) Dec 13 01:26:40.638368 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:40.638393 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:26:40.638520 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:26:40.638595 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:26:40.638621 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:26:40.485237 systemd-networkd[750]: eth0: Gained IPv6LL Dec 13 01:26:40.630559 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:26:40.648585 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:26:40.673528 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:26:40.903693 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:26:40.915214 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:26:40.929675 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:26:40.946259 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:26:41.239992 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:26:41.268234 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:26:41.298545 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:26:41.326369 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:41.338681 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:26:41.392901 ignition[905]: INFO : Ignition 2.19.0 Dec 13 01:26:41.392901 ignition[905]: INFO : Stage: mount Dec 13 01:26:41.392901 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:41.392901 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:26:41.461272 ignition[905]: INFO : mount: mount passed Dec 13 01:26:41.461272 ignition[905]: INFO : Ignition finished successfully Dec 13 01:26:41.395955 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:26:41.438414 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:26:41.469317 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:26:41.527336 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 01:26:41.614122 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (917) Dec 13 01:26:41.640059 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:41.640186 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:26:41.640213 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:26:41.675940 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:26:41.676080 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:26:41.680272 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:26:41.739178 ignition[934]: INFO : Ignition 2.19.0 Dec 13 01:26:41.739178 ignition[934]: INFO : Stage: files Dec 13 01:26:41.758349 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:41.758349 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:26:41.758349 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:26:41.758349 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:26:41.758349 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:26:41.851778 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:26:41.851778 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:26:41.851778 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:26:41.851778 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:26:41.851778 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:26:41.763458 unknown[934]: wrote ssh authorized keys file for user: core Dec 13 01:26:41.995256 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:26:42.074595 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:26:42.074595 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" 
Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:26:42.126258 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 01:26:42.415369 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:26:42.917979 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:26:42.917979 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:26:42.936491 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:26:42.936491 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:26:42.936491 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:26:42.936491 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:26:42.936491 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:26:42.936491 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:26:42.936491 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:26:42.936491 ignition[934]: INFO : files: files passed Dec 13 01:26:42.936491 ignition[934]: INFO : Ignition finished successfully Dec 13 01:26:42.922941 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:26:42.962407 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:26:43.027303 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:26:43.033892 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:26:43.227254 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:26:43.227254 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:26:43.034020 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
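The files stage above is driven entirely by the fetched config. A hedged sketch of a minimal Ignition config that would produce operations like these, one downloaded file plus one unit preset to enabled; the spec version and the prepare-helm.service body are illustrative assumptions, and only the target path, download URL, and unit name come from the log:

    import json

    config = {
        "ignition": {"version": "3.4.0"},    # assumed spec level
        "storage": {"files": [{
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": {
                "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
            },
        }]},
        "systemd": {"units": [{
            "name": "prepare-helm.service",
            "enabled": True,    # -> "setting preset to enabled" in the log
            "contents": "[Unit]\nDescription=Unpack helm (illustrative body)\n"
                        "[Service]\nType=oneshot\n"
                        "ExecStart=/usr/bin/tar -C /opt/bin -xzf "
                        "/opt/helm-v3.13.2-linux-amd64.tar.gz\n"
                        "[Install]\nWantedBy=multi-user.target\n",
        }]},
    }
    print(json.dumps(config, indent=2))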
Dec 13 01:26:43.276510 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:26:43.108918 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:26:43.145024 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:26:43.182346 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:26:43.271833 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:26:43.271972 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:26:43.287495 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:26:43.319493 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:26:43.350471 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:26:43.357547 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:26:43.480661 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:26:43.512337 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:26:43.564652 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:26:43.591495 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:43.592121 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:26:43.627762 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:26:43.628126 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:26:43.682323 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:26:43.682760 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:26:43.700689 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:26:43.719808 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:26:43.771436 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:26:43.772110 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:26:43.790664 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:26:43.825661 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:26:43.837837 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:26:43.858891 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:26:43.878723 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:26:43.878980 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:26:43.917849 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:26:43.932905 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:43.974686 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:26:43.974968 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:44.008437 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:26:44.008686 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Dec 13 01:26:44.059726 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:26:44.060018 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:26:44.085748 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:26:44.086070 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:26:44.105434 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:26:44.140595 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:26:44.207331 ignition[986]: INFO : Ignition 2.19.0 Dec 13 01:26:44.207331 ignition[986]: INFO : Stage: umount Dec 13 01:26:44.207331 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:44.207331 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:26:44.207331 ignition[986]: INFO : umount: umount passed Dec 13 01:26:44.207331 ignition[986]: INFO : Ignition finished successfully Dec 13 01:26:44.141191 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:44.176976 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:26:44.215496 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:26:44.215759 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:44.230862 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:26:44.231150 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:26:44.286894 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:26:44.287931 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:26:44.288076 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:26:44.290056 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:26:44.290195 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:26:44.330444 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:26:44.330603 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:26:44.372222 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:26:44.372392 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:26:44.391419 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:26:44.391503 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:26:44.412345 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:26:44.412455 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:26:44.437382 systemd[1]: Stopped target network.target - Network. Dec 13 01:26:44.455260 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:26:44.455492 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:26:44.477352 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:26:44.492247 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:26:44.494181 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:44.511223 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:26:44.526244 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:26:44.541415 systemd[1]: iscsid.socket: Deactivated successfully. 
Dec 13 01:26:44.541492 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:26:44.549498 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:26:44.549568 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:26:44.584401 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:26:44.584501 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:26:44.609423 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:26:44.609507 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:26:44.617430 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:26:44.617506 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:26:44.634703 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:26:44.640104 systemd-networkd[750]: eth0: DHCPv6 lease lost Dec 13 01:26:44.651526 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:26:44.684921 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:26:44.685102 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:26:44.704948 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:26:44.705434 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:26:44.732013 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:26:44.732105 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:44.777295 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:26:44.800255 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:26:44.800502 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:26:44.829375 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:26:44.829488 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:44.848472 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:26:44.848593 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:44.859372 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:26:44.859596 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:44.891570 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:44.913853 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:26:45.453334 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 01:26:44.914221 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:44.954656 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:26:44.954786 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:45.006568 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:26:45.006713 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:45.038486 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:26:45.038615 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:26:45.089639 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Dec 13 01:26:45.089728 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:26:45.120453 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:26:45.120698 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:45.177419 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:26:45.207221 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:26:45.207465 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:45.219695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:45.219776 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:45.255104 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:26:45.255248 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:26:45.274889 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:26:45.275020 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:26:45.287151 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:26:45.326403 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:26:45.401582 systemd[1]: Switching root. Dec 13 01:26:45.785296 systemd-journald[183]: Journal stopped Dec 13 01:26:49.200933 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:26:49.201022 kernel: SELinux: policy capability open_perms=1 Dec 13 01:26:49.201077 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:26:49.201095 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:26:49.201114 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:26:49.201132 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:26:49.201157 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:26:49.201185 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:26:49.201209 kernel: audit: type=1403 audit(1734053206.205:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:26:49.201236 systemd[1]: Successfully loaded SELinux policy in 127.336ms. Dec 13 01:26:49.201262 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.311ms. Dec 13 01:26:49.201286 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:26:49.201306 systemd[1]: Detected virtualization google. Dec 13 01:26:49.201327 systemd[1]: Detected architecture x86-64. Dec 13 01:26:49.201368 systemd[1]: Detected first boot. Dec 13 01:26:49.201392 systemd[1]: Initializing machine ID from random generator. Dec 13 01:26:49.201416 zram_generator::config[1027]: No configuration found. Dec 13 01:26:49.201439 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:26:49.201463 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:26:49.201491 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:26:49.201514 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Dec 13 01:26:49.201539 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:26:49.201559 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:26:49.201583 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:26:49.201604 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:26:49.201626 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:26:49.201652 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:26:49.201677 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:26:49.201698 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:26:49.201724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:49.201748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:49.201768 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:26:49.201789 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:26:49.201811 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:26:49.201837 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:26:49.201859 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:26:49.201879 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:49.201900 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:26:49.201920 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:26:49.201941 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:26:49.201970 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:26:49.201992 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:49.202015 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:26:49.202092 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:26:49.202113 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:26:49.202132 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:26:49.202153 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:26:49.202175 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:49.202197 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:49.202220 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:49.202256 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:26:49.202281 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:26:49.202305 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:26:49.202341 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:26:49.202367 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
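Many of the skipped units above fail systemd condition checks such as ConditionVirtualization=xen, which is false on this KVM guest. A rough sketch of the same decision using systemd-detect-virt, a standard systemd tool; this is an illustration only, since systemd evaluates these conditions internally rather than by running the binary:

    import subprocess

    # systemd-detect-virt prints the detected hypervisor ("kvm" here) and
    # exits nonzero when none is found.
    result = subprocess.run(["systemd-detect-virt"],
                            capture_output=True, text=True)
    virt = result.stdout.strip() or "none"
    if virt == "xen":
        print("ConditionVirtualization=xen met; proc-xen.mount would start")
    else:
        print(f"detected virtualization '{virt}'; unit skipped")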
Dec 13 01:26:49.202397 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:26:49.202422 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:26:49.202446 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:26:49.202470 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:26:49.202496 systemd[1]: Reached target machines.target - Containers. Dec 13 01:26:49.202520 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:26:49.202545 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:49.202573 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:26:49.202603 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:26:49.202629 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:49.202653 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:26:49.202680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:49.202702 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:26:49.202728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:49.202754 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:26:49.202780 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:26:49.202809 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:26:49.202834 kernel: ACPI: bus type drm_connector registered Dec 13 01:26:49.202860 kernel: fuse: init (API version 7.39) Dec 13 01:26:49.202882 kernel: loop: module loaded Dec 13 01:26:49.202905 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:26:49.202931 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:26:49.202956 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:26:49.202980 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:26:49.203004 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:26:49.203233 systemd-journald[1114]: Collecting audit messages is disabled. Dec 13 01:26:49.203364 systemd-journald[1114]: Journal started Dec 13 01:26:49.203422 systemd-journald[1114]: Runtime Journal (/run/log/journal/5b50b6065a824838b1146dfc316c6658) is 8.0M, max 148.7M, 140.7M free. Dec 13 01:26:49.212237 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:26:47.611394 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:26:47.642913 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 01:26:47.643559 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:26:49.262081 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:26:49.262193 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:26:49.262225 systemd[1]: Stopped verity-setup.service. 
Dec 13 01:26:49.301064 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:49.311148 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:26:49.322726 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:26:49.333515 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:26:49.343542 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:26:49.353507 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:26:49.364470 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:26:49.374473 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:26:49.384783 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:26:49.397750 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:49.411782 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:26:49.412044 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:26:49.424765 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:49.425102 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:49.439872 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:26:49.440206 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:26:49.454822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:49.455124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:49.471537 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:26:49.471972 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:26:49.486795 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:49.487078 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:49.502721 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:49.514742 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:26:49.530746 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:26:49.542683 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:49.570954 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:26:49.587241 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:26:49.616216 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:26:49.628389 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:26:49.628700 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:26:49.643390 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:26:49.668478 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:26:49.683170 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Dec 13 01:26:49.694704 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:49.704505 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:26:49.726958 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:26:49.739299 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:26:49.749321 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:26:49.761461 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:26:49.765929 systemd-journald[1114]: Time spent on flushing to /var/log/journal/5b50b6065a824838b1146dfc316c6658 is 46.784ms for 926 entries. Dec 13 01:26:49.765929 systemd-journald[1114]: System Journal (/var/log/journal/5b50b6065a824838b1146dfc316c6658) is 8.0M, max 584.8M, 576.8M free. Dec 13 01:26:49.882305 systemd-journald[1114]: Received client request to flush runtime journal. Dec 13 01:26:49.781480 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:26:49.802498 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:26:49.832503 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:26:49.856403 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:26:49.874673 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:26:49.889691 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:26:49.905138 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:26:49.922450 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:26:49.939601 kernel: loop0: detected capacity change from 0 to 140768 Dec 13 01:26:49.946743 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:26:49.961143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:49.996153 udevadm[1148]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:26:49.997790 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:26:50.037267 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:26:50.054728 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:26:50.077080 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:26:50.096727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:26:50.125077 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 01:26:50.142248 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:26:50.143835 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:26:50.195105 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Dec 13 01:26:50.195137 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. 
Dec 13 01:26:50.211669 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:50.302121 kernel: loop2: detected capacity change from 0 to 142488 Dec 13 01:26:50.470221 kernel: loop3: detected capacity change from 0 to 54824 Dec 13 01:26:50.586065 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 01:26:50.676401 kernel: loop5: detected capacity change from 0 to 205544 Dec 13 01:26:50.766356 kernel: loop6: detected capacity change from 0 to 142488 Dec 13 01:26:50.851195 kernel: loop7: detected capacity change from 0 to 54824 Dec 13 01:26:50.900114 (sd-merge)[1169]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Dec 13 01:26:50.901054 (sd-merge)[1169]: Merged extensions into '/usr'. Dec 13 01:26:50.911822 systemd[1]: Reloading requested from client PID 1145 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:26:50.912296 systemd[1]: Reloading... Dec 13 01:26:51.080056 zram_generator::config[1193]: No configuration found. Dec 13 01:26:51.340108 ldconfig[1140]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:26:51.371399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:51.455063 systemd[1]: Reloading finished in 538 ms. Dec 13 01:26:51.488485 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:26:51.498634 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:26:51.510653 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:26:51.536311 systemd[1]: Starting ensure-sysext.service... Dec 13 01:26:51.548261 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:26:51.572321 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:51.589086 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:26:51.589128 systemd[1]: Reloading... Dec 13 01:26:51.600448 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:26:51.605371 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:26:51.607453 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:26:51.610169 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Dec 13 01:26:51.610334 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Dec 13 01:26:51.619524 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:26:51.620259 systemd-tmpfiles[1237]: Skipping /boot Dec 13 01:26:51.654732 systemd-udevd[1238]: Using default interface naming scheme 'v255'. Dec 13 01:26:51.659430 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:26:51.663100 systemd-tmpfiles[1237]: Skipping /boot Dec 13 01:26:51.762059 zram_generator::config[1276]: No configuration found. 
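The (sd-merge) lines above are systemd-sysext overlaying the named extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-gce) onto /usr, which is why systemd reloads immediately afterwards. A small sketch listing what sysext would pick up, using the search directories documented in systemd-sysext(8); on this host it would show, for example, the kubernetes.raw symlink Ignition wrote under /etc/extensions earlier:

    from pathlib import Path

    # Documented systemd-sysext search paths.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        base = Path(d)
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            target = entry.resolve() if entry.is_symlink() else entry
            print(f"{entry} -> {target}")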
Dec 13 01:26:51.936078 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1293) Dec 13 01:26:51.977770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:52.017095 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1293) Dec 13 01:26:52.097072 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 01:26:52.114079 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:26:52.127070 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:26:52.127180 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 01:26:52.145066 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 01:26:52.156916 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:26:52.158178 systemd[1]: Reloading finished in 568 ms. Dec 13 01:26:52.162062 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1297) Dec 13 01:26:52.191277 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:52.207338 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 01:26:52.219774 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:52.301266 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:52.304542 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:26:52.310083 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:26:52.313457 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:52.333367 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:26:52.345431 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:52.352346 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:52.370251 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:52.389156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:52.397609 augenrules[1357]: No rules Dec 13 01:26:52.400390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:52.406496 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:26:52.428197 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:26:52.447548 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:26:52.463445 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:26:52.474172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:52.481052 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:52.491861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 01:26:52.492097 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:52.503813 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:26:52.515791 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:52.516014 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:52.527798 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:52.528019 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:52.538797 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:26:52.553125 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 13 01:26:52.583889 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:26:52.600328 systemd[1]: Finished ensure-sysext.service. Dec 13 01:26:52.609672 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:26:52.625200 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:52.625492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:52.630260 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:26:52.652160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:52.671107 lvm[1375]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:26:52.673235 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:26:52.695233 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:52.712286 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:52.732274 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:26:52.741321 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:52.743184 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:26:52.754243 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:26:52.772386 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:26:52.789304 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:26:52.796323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:52.810186 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:26:52.810256 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:52.815910 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:26:52.827781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:52.829178 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:52.840660 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 13 01:26:52.841599 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:26:52.842152 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:52.842374 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:52.842852 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:52.843980 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:52.849958 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:26:52.851158 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:26:52.851884 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:26:52.869142 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:26:52.876340 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:26:52.877170 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:26:52.877269 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:26:52.883256 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:26:52.894963 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Dec 13 01:26:52.920738 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:26:52.993620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:53.005746 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:26:53.024561 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Dec 13 01:26:53.029729 systemd-networkd[1363]: lo: Link UP Dec 13 01:26:53.029747 systemd-networkd[1363]: lo: Gained carrier Dec 13 01:26:53.032008 systemd-networkd[1363]: Enumeration completed Dec 13 01:26:53.032599 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:53.032610 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:26:53.033760 systemd-networkd[1363]: eth0: Link UP Dec 13 01:26:53.033773 systemd-networkd[1363]: eth0: Gained carrier Dec 13 01:26:53.033796 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:53.036742 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:26:53.041290 systemd-networkd[1363]: eth0: DHCPv4 address 10.128.0.34/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 01:26:53.047965 systemd-resolved[1364]: Positive Trust Anchors: Dec 13 01:26:53.047984 systemd-resolved[1364]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:26:53.048095 systemd-resolved[1364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:26:53.055341 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:26:53.055867 systemd-resolved[1364]: Defaulting to hostname 'linux'. Dec 13 01:26:53.066368 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:26:53.076574 systemd[1]: Reached target network.target - Network. Dec 13 01:26:53.085177 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:26:53.096203 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:26:53.106333 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:26:53.117252 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:26:53.128409 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:26:53.138336 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:26:53.149194 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:26:53.160180 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:26:53.160242 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:26:53.168165 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:26:53.178132 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:26:53.189865 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:26:53.207901 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:26:53.219092 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:26:53.229370 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:26:53.239210 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:26:53.247240 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:26:53.247292 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:26:53.259200 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:26:53.274304 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:26:53.296277 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:26:53.337196 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:26:53.345575 jq[1427]: false Dec 13 01:26:53.358291 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
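systemd-networkd notes twice above that zz-default.network matched eth0 by a potentially unpredictable interface name. A sketch of pinning the match to the NIC's hardware address instead; the MAC below is a hypothetical value inferred from the link-local fe80::4001:aff:fe80:22 that appears later in this log, so confirm it with `ip link` before use:

    cat >/etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    # Match on hardware address rather than the kernel-assigned name.
    MACAddress=42:01:0a:80:00:22

    [Network]
    DHCP=ipv4
    EOF
    systemctl restart systemd-networkd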
Dec 13 01:26:53.367314 coreos-metadata[1425]: Dec 13 01:26:53.367 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Dec 13 01:26:53.368196 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:26:53.372069 coreos-metadata[1425]: Dec 13 01:26:53.370 INFO Fetch successful Dec 13 01:26:53.372069 coreos-metadata[1425]: Dec 13 01:26:53.370 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Dec 13 01:26:53.372069 coreos-metadata[1425]: Dec 13 01:26:53.370 INFO Fetch successful Dec 13 01:26:53.372069 coreos-metadata[1425]: Dec 13 01:26:53.371 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Dec 13 01:26:53.372069 coreos-metadata[1425]: Dec 13 01:26:53.372 INFO Fetch successful Dec 13 01:26:53.372370 coreos-metadata[1425]: Dec 13 01:26:53.372 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Dec 13 01:26:53.372781 coreos-metadata[1425]: Dec 13 01:26:53.372 INFO Fetch successful Dec 13 01:26:53.375238 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:26:53.394308 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:26:53.405178 extend-filesystems[1430]: Found loop4 Dec 13 01:26:53.405178 extend-filesystems[1430]: Found loop5 Dec 13 01:26:53.405178 extend-filesystems[1430]: Found loop6 Dec 13 01:26:53.405178 extend-filesystems[1430]: Found loop7 Dec 13 01:26:53.405178 extend-filesystems[1430]: Found sda Dec 13 01:26:53.405178 extend-filesystems[1430]: Found sda1 Dec 13 01:26:53.405178 extend-filesystems[1430]: Found sda2 Dec 13 01:26:53.405178 extend-filesystems[1430]: Found sda3 Dec 13 01:26:53.405178 extend-filesystems[1430]: Found usr Dec 13 01:26:53.405178 extend-filesystems[1430]: Found sda4 Dec 13 01:26:53.405178 extend-filesystems[1430]: Found sda6 Dec 13 01:26:53.405178 extend-filesystems[1430]: Found sda7 Dec 13 01:26:53.405178 extend-filesystems[1430]: Found sda9 Dec 13 01:26:53.405178 extend-filesystems[1430]: Checking size of /dev/sda9 Dec 13 01:26:53.576462 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 01:26:53.576525 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 01:26:53.576556 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1288) Dec 13 01:26:53.409228 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:26:53.576807 extend-filesystems[1430]: Resized partition /dev/sda9 Dec 13 01:26:53.431707 dbus-daemon[1426]: [system] SELinux support is enabled Dec 13 01:26:53.464986 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:26:53.585997 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:26:53.585997 extend-filesystems[1445]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 01:26:53.585997 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 01:26:53.585997 extend-filesystems[1445]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. 
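The resize above grows the mounted root filesystem online from 1617920 to 2538491 4 KiB blocks, i.e. from about 6.2 GiB to about 9.7 GiB (2538491 x 4096 bytes). A sketch of the same operation done by hand, assuming cloud-utils provides growpart:

    growpart /dev/sda 9   # extend partition 9 to the end of the disk
    resize2fs /dev/sda9   # grow the mounted ext4 filesystem to match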
Dec 13 01:26:53.436213 dbus-daemon[1426]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1363 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:26:53.497261 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:26:53.664795 extend-filesystems[1430]: Resized filesystem in /dev/sda9 Dec 13 01:26:53.543135 ntpd[1433]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:26:53.523305 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:26:53.543166 ntpd[1433]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:26:53.553011 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Dec 13 01:26:53.543191 ntpd[1433]: ---------------------------------------------------- Dec 13 01:26:53.557469 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:26:53.543205 ntpd[1433]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:26:53.676792 update_engine[1455]: I20241213 01:26:53.637841 1455 main.cc:92] Flatcar Update Engine starting Dec 13 01:26:53.676792 update_engine[1455]: I20241213 01:26:53.639828 1455 update_check_scheduler.cc:74] Next update check in 6m41s Dec 13 01:26:53.560590 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:26:53.543218 ntpd[1433]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:26:53.678537 jq[1459]: true Dec 13 01:26:53.604634 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:26:53.543231 ntpd[1433]: corporation. Support and training for ntp-4 are Dec 13 01:26:53.637224 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:26:53.543245 ntpd[1433]: available at https://www.nwtime.org/support Dec 13 01:26:53.663647 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:26:53.543257 ntpd[1433]: ---------------------------------------------------- Dec 13 01:26:53.663951 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:26:53.545313 ntpd[1433]: proto: precision = 0.097 usec (-23) Dec 13 01:26:53.664454 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:26:53.547328 ntpd[1433]: basedate set to 2024-11-30 Dec 13 01:26:53.665100 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:26:53.547352 ntpd[1433]: gps base set to 2024-12-01 (week 2343) Dec 13 01:26:53.676337 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:26:53.549656 ntpd[1433]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:26:53.676368 systemd-logind[1452]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 01:26:53.549703 ntpd[1433]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:26:53.676400 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:26:53.549897 ntpd[1433]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:26:53.676667 systemd-logind[1452]: New seat seat0. Dec 13 01:26:53.549935 ntpd[1433]: Listen normally on 3 eth0 10.128.0.34:123 Dec 13 01:26:53.549978 ntpd[1433]: Listen normally on 4 lo [::1]:123 Dec 13 01:26:53.550023 ntpd[1433]: bind(21) AF_INET6 fe80::4001:aff:fe80:22%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:26:53.550086 ntpd[1433]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:22%2#123 Dec 13 01:26:53.550102 ntpd[1433]: failed to init interface for address fe80::4001:aff:fe80:22%2 Dec 13 01:26:53.550135 ntpd[1433]: Listening on routing socket on fd #21 for interface updates Dec 13 01:26:53.551756 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:26:53.551792 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:26:53.689567 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:26:53.699844 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:26:53.700162 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Dec 13 01:26:53.719626 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:26:53.720099 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:26:53.752189 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:26:53.766671 jq[1463]: true Dec 13 01:26:53.787633 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:26:53.815850 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:26:53.850548 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:26:53.860064 tar[1462]: linux-amd64/helm Dec 13 01:26:53.866813 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:26:53.878236 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:26:53.878515 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:26:53.878758 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:26:53.906454 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:26:53.917201 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:26:53.917477 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:26:53.939159 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:26:53.964066 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:26:53.964908 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:26:53.991425 systemd[1]: Starting sshkeys.service... Dec 13 01:26:54.049963 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:26:54.072158 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:26:54.228463 coreos-metadata[1499]: Dec 13 01:26:54.224 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 01:26:54.227160 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
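locksmithd, started above as the cluster reboot manager, decides when the updates staged by update_engine may actually reboot the machine. A sketch of pinning the strategy, assuming the usual Flatcar configuration path and documented values (reboot, etcd-lock, off):

    # Hypothetical strategy pin; etcd-lock serializes reboots via etcd.
    cat >>/etc/flatcar/update.conf <<'EOF'
    REBOOT_STRATEGY=etcd-lock
    EOF
    systemctl restart locksmithd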
Dec 13 01:26:54.226848 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:26:54.229704 dbus-daemon[1426]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1494 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:26:54.230444 coreos-metadata[1499]: Dec 13 01:26:54.230 INFO Fetch failed with 404: resource not found Dec 13 01:26:54.230444 coreos-metadata[1499]: Dec 13 01:26:54.230 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 01:26:54.231305 coreos-metadata[1499]: Dec 13 01:26:54.231 INFO Fetch successful Dec 13 01:26:54.231305 coreos-metadata[1499]: Dec 13 01:26:54.231 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 01:26:54.235469 coreos-metadata[1499]: Dec 13 01:26:54.235 INFO Fetch failed with 404: resource not found Dec 13 01:26:54.235469 coreos-metadata[1499]: Dec 13 01:26:54.235 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 01:26:54.236157 coreos-metadata[1499]: Dec 13 01:26:54.236 INFO Fetch failed with 404: resource not found Dec 13 01:26:54.236157 coreos-metadata[1499]: Dec 13 01:26:54.236 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 01:26:54.239156 coreos-metadata[1499]: Dec 13 01:26:54.239 INFO Fetch successful Dec 13 01:26:54.250895 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:26:54.251578 unknown[1499]: wrote ssh authorized keys file for user: core Dec 13 01:26:54.312401 systemd-networkd[1363]: eth0: Gained IPv6LL Dec 13 01:26:54.321479 update-ssh-keys[1507]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:26:54.323504 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:26:54.337835 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:26:54.352110 systemd[1]: Finished sshkeys.service. Dec 13 01:26:54.363792 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:26:54.389606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:54.397536 polkitd[1506]: Started polkitd version 121 Dec 13 01:26:54.412183 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:26:54.437992 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Dec 13 01:26:54.445399 polkitd[1506]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:26:54.445522 polkitd[1506]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:26:54.456313 polkitd[1506]: Finished loading, compiling and executing 2 rules Dec 13 01:26:54.462272 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:26:54.464155 systemd[1]: Started polkit.service - Authorization Manager. 
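The 404-then-success pattern above is the agent walking the GCE metadata fallback chain for SSH keys: instance sshKeys, instance ssh-keys, instance block-project-ssh-keys, then the project-level attributes. The same endpoints can be queried by hand; the Metadata-Flavor header is mandatory, and requests without it are rejected:

    curl -s -H 'Metadata-Flavor: Google' \
      'http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys'
    curl -s -H 'Metadata-Flavor: Google' \
      'http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys'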
Dec 13 01:26:54.466517 polkitd[1506]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:26:54.486111 init.sh[1518]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 01:26:54.486111 init.sh[1518]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 01:26:54.486111 init.sh[1518]: + /usr/bin/google_instance_setup Dec 13 01:26:54.546727 systemd-hostnamed[1494]: Hostname set to (transient) Dec 13 01:26:54.547626 systemd-resolved[1364]: System hostname changed to 'ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal'. Dec 13 01:26:54.581069 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:26:54.699714 containerd[1465]: time="2024-12-13T01:26:54.697839696Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:26:54.818855 containerd[1465]: time="2024-12-13T01:26:54.817166484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:54.823401 containerd[1465]: time="2024-12-13T01:26:54.823335580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:54.823401 containerd[1465]: time="2024-12-13T01:26:54.823397059Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:26:54.823559 containerd[1465]: time="2024-12-13T01:26:54.823423817Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:26:54.823686 containerd[1465]: time="2024-12-13T01:26:54.823651980Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:26:54.823768 containerd[1465]: time="2024-12-13T01:26:54.823697207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:54.823823 containerd[1465]: time="2024-12-13T01:26:54.823791633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:54.823823 containerd[1465]: time="2024-12-13T01:26:54.823816792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:54.824179 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:26:54.829059 containerd[1465]: time="2024-12-13T01:26:54.827526498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:54.829059 containerd[1465]: time="2024-12-13T01:26:54.827566087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:54.829059 containerd[1465]: time="2024-12-13T01:26:54.827592699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:54.829059 containerd[1465]: time="2024-12-13T01:26:54.827610232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:54.829059 containerd[1465]: time="2024-12-13T01:26:54.827739847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:54.829059 containerd[1465]: time="2024-12-13T01:26:54.828022794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:54.829347 containerd[1465]: time="2024-12-13T01:26:54.829315753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:54.829402 containerd[1465]: time="2024-12-13T01:26:54.829346806Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:26:54.829529 containerd[1465]: time="2024-12-13T01:26:54.829499354Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:26:54.829617 containerd[1465]: time="2024-12-13T01:26:54.829594285Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:26:54.840406 containerd[1465]: time="2024-12-13T01:26:54.840363707Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:26:54.840509 containerd[1465]: time="2024-12-13T01:26:54.840447692Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:26:54.840509 containerd[1465]: time="2024-12-13T01:26:54.840475610Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:26:54.840599 containerd[1465]: time="2024-12-13T01:26:54.840547781Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:26:54.840599 containerd[1465]: time="2024-12-13T01:26:54.840584828Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:26:54.840805 containerd[1465]: time="2024-12-13T01:26:54.840775450Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841337509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841499537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841526874Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841550159Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841585789Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841610454Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841631299Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841653172Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841676744Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841698307Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841719131Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841739072Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841771323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.842059 containerd[1465]: time="2024-12-13T01:26:54.841793949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.842708 containerd[1465]: time="2024-12-13T01:26:54.841815432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.842708 containerd[1465]: time="2024-12-13T01:26:54.841837832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.842708 containerd[1465]: time="2024-12-13T01:26:54.841863546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.842708 containerd[1465]: time="2024-12-13T01:26:54.841884774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.842708 containerd[1465]: time="2024-12-13T01:26:54.841933138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.842708 containerd[1465]: time="2024-12-13T01:26:54.841955629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.842708 containerd[1465]: time="2024-12-13T01:26:54.841977431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.842708 containerd[1465]: time="2024-12-13T01:26:54.842000409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.842019534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.843478134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.843506159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.843551977Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.843589713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.843611364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.843629086Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.844170388Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.844212351Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.844233633Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.844258891Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.844276881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.844299369Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:26:54.846307 containerd[1465]: time="2024-12-13T01:26:54.844325435Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:26:54.846957 containerd[1465]: time="2024-12-13T01:26:54.844343891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:26:54.847009 containerd[1465]: time="2024-12-13T01:26:54.844812255Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:26:54.847009 containerd[1465]: time="2024-12-13T01:26:54.844970650Z" level=info msg="Connect containerd service" Dec 13 01:26:54.847009 containerd[1465]: time="2024-12-13T01:26:54.845023080Z" level=info msg="using legacy CRI server" Dec 13 01:26:54.847009 containerd[1465]: time="2024-12-13T01:26:54.845054548Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:26:54.847009 containerd[1465]: time="2024-12-13T01:26:54.845190501Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:26:54.850601 containerd[1465]: time="2024-12-13T01:26:54.848474861Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:26:54.850601 
containerd[1465]: time="2024-12-13T01:26:54.848895682Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:26:54.850601 containerd[1465]: time="2024-12-13T01:26:54.848967981Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:26:54.850601 containerd[1465]: time="2024-12-13T01:26:54.849048132Z" level=info msg="Start subscribing containerd event" Dec 13 01:26:54.850601 containerd[1465]: time="2024-12-13T01:26:54.849102618Z" level=info msg="Start recovering state" Dec 13 01:26:54.850601 containerd[1465]: time="2024-12-13T01:26:54.849186573Z" level=info msg="Start event monitor" Dec 13 01:26:54.850601 containerd[1465]: time="2024-12-13T01:26:54.849202112Z" level=info msg="Start snapshots syncer" Dec 13 01:26:54.850601 containerd[1465]: time="2024-12-13T01:26:54.849215338Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:26:54.850601 containerd[1465]: time="2024-12-13T01:26:54.849227463Z" level=info msg="Start streaming server" Dec 13 01:26:54.849422 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:26:54.854427 containerd[1465]: time="2024-12-13T01:26:54.853917621Z" level=info msg="containerd successfully booted in 0.159311s" Dec 13 01:26:55.212230 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:26:55.298877 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:26:55.317092 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:26:55.332559 systemd[1]: Started sshd@0-10.128.0.34:22-147.75.109.163:58708.service - OpenSSH per-connection server daemon (147.75.109.163:58708). Dec 13 01:26:55.373500 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:26:55.375176 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:26:55.391878 tar[1462]: linux-amd64/LICENSE Dec 13 01:26:55.391878 tar[1462]: linux-amd64/README.md Dec 13 01:26:55.395511 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:26:55.431319 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:26:55.442208 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:26:55.466613 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:26:55.483533 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:26:55.493719 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:26:55.579845 instance-setup[1524]: INFO Running google_set_multiqueue. Dec 13 01:26:55.608695 instance-setup[1524]: INFO Set channels for eth0 to 2. Dec 13 01:26:55.614333 instance-setup[1524]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 01:26:55.617128 instance-setup[1524]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 01:26:55.617212 instance-setup[1524]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 01:26:55.620318 instance-setup[1524]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 01:26:55.620393 instance-setup[1524]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 13 01:26:55.622838 instance-setup[1524]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 01:26:55.623705 instance-setup[1524]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
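containerd's only error above, "no network config found in /etc/cni/net.d", is expected this early in boot: the CRI plugin keeps retrying until a CNI configuration appears, which a kubeadm-installed network add-on would normally provide. A minimal sketch of a bridge conflist that would satisfy it; the network name and subnet are illustrative, not recovered from this boot:

    mkdir -p /etc/cni/net.d
    cat >/etc/cni/net.d/10-containerd-net.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF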
Dec 13 01:26:55.626297 instance-setup[1524]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 01:26:55.636935 instance-setup[1524]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 13 01:26:55.649324 instance-setup[1524]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 13 01:26:55.652171 instance-setup[1524]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 01:26:55.652222 instance-setup[1524]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 01:26:55.674447 init.sh[1518]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 01:26:55.718856 sshd[1552]: Accepted publickey for core from 147.75.109.163 port 58708 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:26:55.721861 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:55.745531 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:26:55.761461 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:26:55.780348 systemd-logind[1452]: New session 1 of user core. Dec 13 01:26:55.809971 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:26:55.834581 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:26:55.872775 (systemd)[1597]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:26:55.893828 startup-script[1592]: INFO Starting startup scripts. Dec 13 01:26:55.901895 startup-script[1592]: INFO No startup scripts found in metadata. Dec 13 01:26:55.901970 startup-script[1592]: INFO Finished running startup scripts. Dec 13 01:26:55.943327 init.sh[1518]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 01:26:55.943327 init.sh[1518]: + daemon_pids=() Dec 13 01:26:55.943327 init.sh[1518]: + for d in accounts clock_skew network Dec 13 01:26:55.943843 init.sh[1603]: + /usr/bin/google_accounts_daemon Dec 13 01:26:55.945266 init.sh[1518]: + daemon_pids+=($!) Dec 13 01:26:55.945266 init.sh[1518]: + for d in accounts clock_skew network Dec 13 01:26:55.945266 init.sh[1518]: + daemon_pids+=($!) Dec 13 01:26:55.945266 init.sh[1518]: + for d in accounts clock_skew network Dec 13 01:26:55.945266 init.sh[1518]: + daemon_pids+=($!) Dec 13 01:26:55.945266 init.sh[1518]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 01:26:55.945266 init.sh[1518]: + /usr/bin/systemd-notify --ready Dec 13 01:26:55.949219 init.sh[1604]: + /usr/bin/google_clock_skew_daemon Dec 13 01:26:55.950349 init.sh[1605]: + /usr/bin/google_network_daemon Dec 13 01:26:55.987181 systemd[1]: Started oem-gce.service - GCE Linux Agent. Dec 13 01:26:55.999055 init.sh[1518]: + wait -n 1603 1604 1605 Dec 13 01:26:56.147669 systemd[1597]: Queued start job for default target default.target. Dec 13 01:26:56.153609 systemd[1597]: Created slice app.slice - User Application Slice. Dec 13 01:26:56.153663 systemd[1597]: Reached target paths.target - Paths. Dec 13 01:26:56.153690 systemd[1597]: Reached target timers.target - Timers. Dec 13 01:26:56.158198 systemd[1597]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:26:56.192596 systemd[1597]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:26:56.192713 systemd[1597]: Reached target sockets.target - Sockets. 
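google_set_multiqueue above spreads the virtio-net queue interrupts and transmit steering across the two vCPUs; the "write error: Value too large" lines are it probing CPU masks wider than this machine has. What those writes amount to, with the IRQ numbers and masks taken from the log:

    # Pin each virtio1 queue interrupt to one vCPU ...
    echo 0 > /proc/irq/31/smp_affinity_list
    echo 0 > /proc/irq/32/smp_affinity_list
    echo 1 > /proc/irq/33/smp_affinity_list
    echo 1 > /proc/irq/34/smp_affinity_list
    # ... and steer transmit queues with XPS bitmasks (1 = CPU0, 2 = CPU1).
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus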
Dec 13 01:26:56.192739 systemd[1597]: Reached target basic.target - Basic System. Dec 13 01:26:56.192810 systemd[1597]: Reached target default.target - Main User Target. Dec 13 01:26:56.192867 systemd[1597]: Startup finished in 308ms. Dec 13 01:26:56.193018 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:26:56.210289 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:26:56.455837 google-networking[1605]: INFO Starting Google Networking daemon. Dec 13 01:26:56.471740 systemd[1]: Started sshd@1-10.128.0.34:22-147.75.109.163:44152.service - OpenSSH per-connection server daemon (147.75.109.163:44152). Dec 13 01:26:56.485512 google-clock-skew[1604]: INFO Starting Google Clock Skew daemon. Dec 13 01:26:56.535864 google-clock-skew[1604]: INFO Clock drift token has changed: 0. Dec 13 01:26:56.544716 ntpd[1433]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:22%2]:123 Dec 13 01:26:56.546419 ntpd[1433]: 13 Dec 01:26:56 ntpd[1433]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:22%2]:123 Dec 13 01:26:56.602466 groupadd[1622]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 01:26:56.608667 groupadd[1622]: group added to /etc/gshadow: name=google-sudoers Dec 13 01:26:56.667376 groupadd[1622]: new group: name=google-sudoers, GID=1000 Dec 13 01:26:56.702786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:57.001735 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:26:57.001848 google-clock-skew[1604]: INFO Synced system time with hardware clock. Dec 13 01:26:57.002062 systemd-resolved[1364]: Clock change detected. Flushing caches. Dec 13 01:26:57.005054 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:57.012008 systemd[1]: Startup finished in 1.030s (kernel) + 10.379s (initrd) + 10.644s (userspace) = 22.054s. Dec 13 01:26:57.031122 google-accounts[1603]: INFO Starting Google Accounts daemon. Dec 13 01:26:57.062496 google-accounts[1603]: WARNING OS Login not installed. Dec 13 01:26:57.064544 google-accounts[1603]: INFO Creating a new user account for 0. Dec 13 01:26:57.068961 init.sh[1640]: useradd: invalid user name '0': use --badname to ignore Dec 13 01:26:57.069359 google-accounts[1603]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 01:26:57.116365 sshd[1618]: Accepted publickey for core from 147.75.109.163 port 44152 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:26:57.119045 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:57.126882 systemd-logind[1452]: New session 2 of user core. Dec 13 01:26:57.133552 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:26:57.333686 sshd[1618]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:57.341179 systemd[1]: sshd@1-10.128.0.34:22-147.75.109.163:44152.service: Deactivated successfully. Dec 13 01:26:57.344756 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:26:57.346114 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:26:57.348707 systemd-logind[1452]: Removed session 2. Dec 13 01:26:57.391772 systemd[1]: Started sshd@2-10.128.0.34:22-147.75.109.163:44154.service - OpenSSH per-connection server daemon (147.75.109.163:44154). 
Dec 13 01:26:57.693762 sshd[1651]: Accepted publickey for core from 147.75.109.163 port 44154 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:26:57.694700 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:57.703026 systemd-logind[1452]: New session 3 of user core. Dec 13 01:26:57.707578 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:26:57.891233 kubelet[1634]: E1213 01:26:57.891165 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:57.893319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:57.893585 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:57.893962 systemd[1]: kubelet.service: Consumed 1.173s CPU time. Dec 13 01:26:57.903262 sshd[1651]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:57.907750 systemd[1]: sshd@2-10.128.0.34:22-147.75.109.163:44154.service: Deactivated successfully. Dec 13 01:26:57.910429 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:26:57.912323 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:26:57.914049 systemd-logind[1452]: Removed session 3. Dec 13 01:26:57.957722 systemd[1]: Started sshd@3-10.128.0.34:22-147.75.109.163:44162.service - OpenSSH per-connection server daemon (147.75.109.163:44162). Dec 13 01:26:58.250722 sshd[1659]: Accepted publickey for core from 147.75.109.163 port 44162 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:26:58.252586 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:58.258811 systemd-logind[1452]: New session 4 of user core. Dec 13 01:26:58.267567 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:26:58.462579 sshd[1659]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:58.466922 systemd[1]: sshd@3-10.128.0.34:22-147.75.109.163:44162.service: Deactivated successfully. Dec 13 01:26:58.469375 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:26:58.471247 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:26:58.472664 systemd-logind[1452]: Removed session 4. Dec 13 01:26:58.518212 systemd[1]: Started sshd@4-10.128.0.34:22-147.75.109.163:44174.service - OpenSSH per-connection server daemon (147.75.109.163:44174). Dec 13 01:26:58.812619 sshd[1666]: Accepted publickey for core from 147.75.109.163 port 44174 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:26:58.814411 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:58.820694 systemd-logind[1452]: New session 5 of user core. Dec 13 01:26:58.827547 systemd[1]: Started session-5.scope - Session 5 of User core. 
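The kubelet crash above (and again at 01:27:08 below) is the standalone unit starting before any configuration exists; /var/lib/kubelet/config.yaml is normally written by kubeadm init or join, so the failure is harmless until then. A minimal hand-written sketch of that file for illustration only; the fields are generic defaults, not values recovered from this machine:

    mkdir -p /var/lib/kubelet
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Match the SystemdCgroup=true runc setting containerd logged earlier.
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    authentication:
      anonymous:
        enabled: false
    EOF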
Dec 13 01:26:59.007415 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:26:59.007953 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:59.025157 sudo[1669]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:59.068418 sshd[1666]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:59.073674 systemd[1]: sshd@4-10.128.0.34:22-147.75.109.163:44174.service: Deactivated successfully. Dec 13 01:26:59.076218 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:26:59.078210 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:26:59.080029 systemd-logind[1452]: Removed session 5. Dec 13 01:26:59.123732 systemd[1]: Started sshd@5-10.128.0.34:22-147.75.109.163:44182.service - OpenSSH per-connection server daemon (147.75.109.163:44182). Dec 13 01:26:59.424868 sshd[1674]: Accepted publickey for core from 147.75.109.163 port 44182 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:26:59.426768 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:59.433209 systemd-logind[1452]: New session 6 of user core. Dec 13 01:26:59.441604 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:26:59.605424 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:26:59.605903 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:59.611002 sudo[1678]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:59.624928 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:26:59.625449 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:59.646906 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:59.649208 auditctl[1681]: No rules Dec 13 01:26:59.649737 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:26:59.650014 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:59.653508 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:59.729022 augenrules[1699]: No rules Dec 13 01:26:59.730485 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:59.732505 sudo[1677]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:59.779557 sshd[1674]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:59.783950 systemd[1]: sshd@5-10.128.0.34:22-147.75.109.163:44182.service: Deactivated successfully. Dec 13 01:26:59.786215 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:26:59.788074 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:26:59.789422 systemd-logind[1452]: Removed session 6. Dec 13 01:26:59.835747 systemd[1]: Started sshd@6-10.128.0.34:22-147.75.109.163:44194.service - OpenSSH per-connection server daemon (147.75.109.163:44194). Dec 13 01:27:00.119730 sshd[1707]: Accepted publickey for core from 147.75.109.163 port 44194 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:27:00.121714 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:00.128449 systemd-logind[1452]: New session 7 of user core. 
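The sequence above is the standard way to swap audit rules atomically: stop the unit to flush the kernel ruleset (hence auditctl reporting "No rules"), then let augenrules reassemble whatever remains under /etc/audit/rules.d/. A sketch of adding a rule and reloading; the watched path and key are a hypothetical example:

    cat >/etc/audit/rules.d/50-sshd-config.rules <<'EOF'
    # Log any write or attribute change on the sshd configuration.
    -w /etc/ssh/sshd_config -p wa -k sshd_config
    EOF
    augenrules --load   # merge rules.d/ and load into the kernel
    auditctl -l         # list the rules now active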
Dec 13 01:27:00.135583 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:27:00.298620 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:27:00.299128 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:27:00.735742 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:27:00.747939 (dockerd)[1727]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:27:01.201424 dockerd[1727]: time="2024-12-13T01:27:01.201315061Z" level=info msg="Starting up" Dec 13 01:27:01.321782 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport163018564-merged.mount: Deactivated successfully. Dec 13 01:27:01.413603 dockerd[1727]: time="2024-12-13T01:27:01.413221030Z" level=info msg="Loading containers: start." Dec 13 01:27:01.578385 kernel: Initializing XFRM netlink socket Dec 13 01:27:01.696428 systemd-networkd[1363]: docker0: Link UP Dec 13 01:27:01.713126 dockerd[1727]: time="2024-12-13T01:27:01.713061648Z" level=info msg="Loading containers: done." Dec 13 01:27:01.735308 dockerd[1727]: time="2024-12-13T01:27:01.735199246Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:27:01.735544 dockerd[1727]: time="2024-12-13T01:27:01.735384202Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:27:01.735544 dockerd[1727]: time="2024-12-13T01:27:01.735532154Z" level=info msg="Daemon has completed initialization" Dec 13 01:27:01.773747 dockerd[1727]: time="2024-12-13T01:27:01.773678627Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:27:01.774016 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:27:02.682069 containerd[1465]: time="2024-12-13T01:27:02.682017662Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 01:27:03.219282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923632467.mount: Deactivated successfully. 
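dockerd above settles on overlay2 but warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR, so image builds fall back to a slower diff path. A quick check of the active driver, plus the kernel side of the warning where /proc/config.gz is exposed:

    docker info --format '{{.Driver}}'   # expect: overlay2
    # Kernel option behind the degraded-diff warning (if config.gz exists):
    zcat /proc/config.gz | grep CONFIG_OVERLAY_FS_REDIRECT_DIR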
Dec 13 01:27:04.690862 containerd[1465]: time="2024-12-13T01:27:04.690791821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:04.692437 containerd[1465]: time="2024-12-13T01:27:04.692366386Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27982111"
Dec 13 01:27:04.693448 containerd[1465]: time="2024-12-13T01:27:04.693402251Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:04.696992 containerd[1465]: time="2024-12-13T01:27:04.696928773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:04.698635 containerd[1465]: time="2024-12-13T01:27:04.698413248Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.016338923s"
Dec 13 01:27:04.698635 containerd[1465]: time="2024-12-13T01:27:04.698467301Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Dec 13 01:27:04.701304 containerd[1465]: time="2024-12-13T01:27:04.701273205Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 01:27:06.143236 containerd[1465]: time="2024-12-13T01:27:06.143172707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:06.144789 containerd[1465]: time="2024-12-13T01:27:06.144702377Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24704091"
Dec 13 01:27:06.146186 containerd[1465]: time="2024-12-13T01:27:06.146122154Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:06.150232 containerd[1465]: time="2024-12-13T01:27:06.149694557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:06.151193 containerd[1465]: time="2024-12-13T01:27:06.151147286Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.449831501s"
Dec 13 01:27:06.151299 containerd[1465]: time="2024-12-13T01:27:06.151198950Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Dec 13 01:27:06.151912 containerd[1465]: time="2024-12-13T01:27:06.151877490Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 01:27:07.335318 containerd[1465]: time="2024-12-13T01:27:07.335242168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:07.336939 containerd[1465]: time="2024-12-13T01:27:07.336868036Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18653983"
Dec 13 01:27:07.338402 containerd[1465]: time="2024-12-13T01:27:07.338307173Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:07.342008 containerd[1465]: time="2024-12-13T01:27:07.341946217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:07.344396 containerd[1465]: time="2024-12-13T01:27:07.343408267Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.191482778s"
Dec 13 01:27:07.344396 containerd[1465]: time="2024-12-13T01:27:07.343456190Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Dec 13 01:27:07.344895 containerd[1465]: time="2024-12-13T01:27:07.344854738Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 01:27:07.897659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:27:07.909702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:27:08.302387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:27:08.312902 (kubelet)[1934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:27:08.405213 kubelet[1934]: E1213 01:27:08.404523 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:27:08.409996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:27:08.410443 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:27:08.874017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188352766.mount: Deactivated successfully.
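[Editor's note: the kubelet[1934] failure above is a plain missing-file error: /var/lib/kubelet/config.yaml does not exist yet. That file is typically written by kubeadm, which presumably has not run at this point in the boot (an assumption based on the static-pod manifests appearing later in this log). A sketch of the same preflight check:]

from pathlib import Path

# Mirrors the run.go:72 error above: kubelet exits with status 1/FAILURE
# when its config file is absent, and systemd schedules a restart.
cfg = Path("/var/lib/kubelet/config.yaml")
if not cfg.exists():
    print(f"kubelet would fail: open {cfg}: no such file or directory")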
Dec 13 01:27:09.571528 containerd[1465]: time="2024-12-13T01:27:09.571460890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:09.572736 containerd[1465]: time="2024-12-13T01:27:09.572668308Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30232138"
Dec 13 01:27:09.574084 containerd[1465]: time="2024-12-13T01:27:09.574018884Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:09.576843 containerd[1465]: time="2024-12-13T01:27:09.576802586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:09.578292 containerd[1465]: time="2024-12-13T01:27:09.577759249Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.232863773s"
Dec 13 01:27:09.578292 containerd[1465]: time="2024-12-13T01:27:09.577808813Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 01:27:09.578904 containerd[1465]: time="2024-12-13T01:27:09.578600973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:27:09.995963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2615544166.mount: Deactivated successfully.
Dec 13 01:27:11.103937 containerd[1465]: time="2024-12-13T01:27:11.103867297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:11.105940 containerd[1465]: time="2024-12-13T01:27:11.105400234Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419"
Dec 13 01:27:11.109369 containerd[1465]: time="2024-12-13T01:27:11.108158833Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:11.115434 containerd[1465]: time="2024-12-13T01:27:11.115387511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:11.116884 containerd[1465]: time="2024-12-13T01:27:11.116842694Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.538197838s"
Dec 13 01:27:11.117057 containerd[1465]: time="2024-12-13T01:27:11.117030298Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 01:27:11.118034 containerd[1465]: time="2024-12-13T01:27:11.117920810Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 01:27:11.595903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681843574.mount: Deactivated successfully.
Dec 13 01:27:11.603779 containerd[1465]: time="2024-12-13T01:27:11.603711859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:11.605206 containerd[1465]: time="2024-12-13T01:27:11.604937768Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Dec 13 01:27:11.608216 containerd[1465]: time="2024-12-13T01:27:11.606516843Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:11.610698 containerd[1465]: time="2024-12-13T01:27:11.609454278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:11.610698 containerd[1465]: time="2024-12-13T01:27:11.610537083Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 492.525304ms"
Dec 13 01:27:11.610698 containerd[1465]: time="2024-12-13T01:27:11.610578272Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 13 01:27:11.611731 containerd[1465]: time="2024-12-13T01:27:11.611484529Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 01:27:12.053401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472727209.mount: Deactivated successfully.
Dec 13 01:27:14.204747 containerd[1465]: time="2024-12-13T01:27:14.204645855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:14.206509 containerd[1465]: time="2024-12-13T01:27:14.206437293Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786556"
Dec 13 01:27:14.207789 containerd[1465]: time="2024-12-13T01:27:14.207693561Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:14.213372 containerd[1465]: time="2024-12-13T01:27:14.211856169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:14.213667 containerd[1465]: time="2024-12-13T01:27:14.213627364Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.602101783s"
Dec 13 01:27:14.213803 containerd[1465]: time="2024-12-13T01:27:14.213776613Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Dec 13 01:27:17.637227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
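[Editor's note: each "Pulled image" entry above reports an exact byte size and wall-clock duration, so effective pull throughput can be read straight off the log. A small worked example using two figures logged above; etcd's 56909194 bytes in 2.602101783s works out to roughly 20.9 MiB/s:]

# Sizes and durations copied from the containerd entries above.
pulls = {
    "kube-apiserver:v1.31.4": (27972283, 2.016338923),
    "etcd:3.5.15-0": (56909194, 2.602101783),
}
for image, (size_bytes, seconds) in pulls.items():
    mib_per_s = size_bytes / seconds / 2**20
    print(f"{image}: {mib_per_s:.1f} MiB/s")  # etcd -> ~20.9 MiB/s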
Dec 13 01:27:17.643788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:27:17.688507 systemd[1]: Reloading requested from client PID 2078 ('systemctl') (unit session-7.scope)...
Dec 13 01:27:17.688529 systemd[1]: Reloading...
Dec 13 01:27:17.871410 zram_generator::config[2122]: No configuration found.
Dec 13 01:27:18.013406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:27:18.129888 systemd[1]: Reloading finished in 440 ms.
Dec 13 01:27:18.198024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:27:18.208620 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:27:18.209820 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:27:18.210114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:27:18.215756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:27:18.801418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:27:18.822155 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:27:18.880218 kubelet[2173]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:27:18.880218 kubelet[2173]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:27:18.880218 kubelet[2173]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:27:18.880828 kubelet[2173]: I1213 01:27:18.880283 2173 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:27:19.411383 kubelet[2173]: I1213 01:27:19.411305 2173 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 01:27:19.411383 kubelet[2173]: I1213 01:27:19.411362 2173 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:27:19.411768 kubelet[2173]: I1213 01:27:19.411729 2173 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 01:27:19.456105 kubelet[2173]: I1213 01:27:19.456067 2173 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:27:19.458225 kubelet[2173]: E1213 01:27:19.457195 2173 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.34:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:27:19.472288 kubelet[2173]: E1213 01:27:19.472232 2173 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 01:27:19.472288 kubelet[2173]: I1213 01:27:19.472272 2173 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 01:27:19.478095 kubelet[2173]: I1213 01:27:19.478062 2173 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:27:19.481001 kubelet[2173]: I1213 01:27:19.480945 2173 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 01:27:19.481283 kubelet[2173]: I1213 01:27:19.481226 2173 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:27:19.481557 kubelet[2173]: I1213 01:27:19.481275 2173 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 01:27:19.481753 kubelet[2173]: I1213 01:27:19.481560 2173 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:27:19.481753 kubelet[2173]: I1213 01:27:19.481580 2173 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 01:27:19.481850 kubelet[2173]: I1213 01:27:19.481754 2173 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:27:19.485837 kubelet[2173]: I1213 01:27:19.485790 2173 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 01:27:19.485949 kubelet[2173]: I1213 01:27:19.485841 2173 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:27:19.485949 kubelet[2173]: I1213 01:27:19.485893 2173 kubelet.go:314] "Adding apiserver pod source"
Dec 13 01:27:19.485949 kubelet[2173]: I1213 01:27:19.485917 2173 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:27:19.499633 kubelet[2173]: W1213 01:27:19.499551 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.34:6443: connect: connection refused
Dec 13 01:27:19.499801 kubelet[2173]: E1213 01:27:19.499638 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.34:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:27:19.499883 kubelet[2173]: W1213 01:27:19.499803 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.34:6443: connect: connection refused
Dec 13 01:27:19.499883 kubelet[2173]: E1213 01:27:19.499861 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.34:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:27:19.501603 kubelet[2173]: I1213 01:27:19.501567 2173 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:27:19.504099 kubelet[2173]: I1213 01:27:19.504065 2173 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:27:19.505275 kubelet[2173]: W1213 01:27:19.505226 2173 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:27:19.506506 kubelet[2173]: I1213 01:27:19.506076 2173 server.go:1269] "Started kubelet"
Dec 13 01:27:19.508948 kubelet[2173]: I1213 01:27:19.507600 2173 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:27:19.516542 kubelet[2173]: E1213 01:27:19.512824 2173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal.1810983be25f93e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,UID:ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 01:27:19.506047976 +0000 UTC m=+0.678170435,LastTimestamp:2024-12-13 01:27:19.506047976 +0000 UTC m=+0.678170435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,}"
Dec 13 01:27:19.518073 kubelet[2173]: I1213 01:27:19.517175 2173 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:27:19.518073 kubelet[2173]: I1213 01:27:19.517585 2173 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:27:19.518073 kubelet[2173]: I1213 01:27:19.518024 2173 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:27:19.518370 kubelet[2173]: I1213 01:27:19.518327 2173 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 01:27:19.519698 kubelet[2173]: I1213 01:27:19.519657 2173 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 01:27:19.522199 kubelet[2173]: E1213 01:27:19.521553 2173 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" not found"
Dec 13 01:27:19.522199 kubelet[2173]: I1213 01:27:19.521643 2173 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 01:27:19.522199 kubelet[2173]: I1213 01:27:19.522016 2173 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 01:27:19.522199 kubelet[2173]: I1213 01:27:19.522165 2173 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 01:27:19.522978 kubelet[2173]: W1213 01:27:19.522885 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.34:6443: connect: connection refused
Dec 13 01:27:19.523069 kubelet[2173]: E1213 01:27:19.523025 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.34:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:27:19.524400 kubelet[2173]: I1213 01:27:19.523947 2173 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:27:19.524400 kubelet[2173]: I1213 01:27:19.524093 2173 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:27:19.525226 kubelet[2173]: E1213 01:27:19.525171 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.34:6443: connect: connection refused" interval="200ms"
Dec 13 01:27:19.525576 kubelet[2173]: E1213 01:27:19.525549 2173 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:27:19.528490 kubelet[2173]: I1213 01:27:19.528436 2173 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:27:19.559551 kubelet[2173]: I1213 01:27:19.559483 2173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:27:19.563469 kubelet[2173]: I1213 01:27:19.563334 2173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:27:19.563469 kubelet[2173]: I1213 01:27:19.563411 2173 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:27:19.563469 kubelet[2173]: I1213 01:27:19.563453 2173 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 01:27:19.563697 kubelet[2173]: E1213 01:27:19.563514 2173 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:27:19.569465 kubelet[2173]: I1213 01:27:19.569021 2173 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:27:19.569465 kubelet[2173]: I1213 01:27:19.569069 2173 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:27:19.569465 kubelet[2173]: I1213 01:27:19.569094 2173 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:27:19.572012 kubelet[2173]: W1213 01:27:19.571955 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.34:6443: connect: connection refused
Dec 13 01:27:19.572146 kubelet[2173]: E1213 01:27:19.572026 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.34:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:27:19.572491 kubelet[2173]: I1213 01:27:19.572400 2173 policy_none.go:49] "None policy: Start"
Dec 13 01:27:19.573486 kubelet[2173]: I1213 01:27:19.573444 2173 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:27:19.573486 kubelet[2173]: I1213 01:27:19.573477 2173 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:27:19.580332 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 01:27:19.596461 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:27:19.601450 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:27:19.610723 kubelet[2173]: I1213 01:27:19.610408 2173 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:27:19.610723 kubelet[2173]: I1213 01:27:19.610691 2173 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 01:27:19.610723 kubelet[2173]: I1213 01:27:19.610709 2173 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:27:19.611647 kubelet[2173]: I1213 01:27:19.611280 2173 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:27:19.613502 kubelet[2173]: E1213 01:27:19.613469 2173 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" not found"
Dec 13 01:27:19.688751 systemd[1]: Created slice kubepods-burstable-pod947dd070c5b562411c84b96894913869.slice - libcontainer container kubepods-burstable-pod947dd070c5b562411c84b96894913869.slice.
Dec 13 01:27:19.717734 kubelet[2173]: I1213 01:27:19.717022 2173 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:19.717527 systemd[1]: Created slice kubepods-burstable-podab1c59efb10015889b9a7aa6174ab8be.slice - libcontainer container kubepods-burstable-podab1c59efb10015889b9a7aa6174ab8be.slice.
Dec 13 01:27:19.718249 kubelet[2173]: E1213 01:27:19.717698 2173 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.34:6443/api/v1/nodes\": dial tcp 10.128.0.34:6443: connect: connection refused" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:19.722874 kubelet[2173]: I1213 01:27:19.722440 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/947dd070c5b562411c84b96894913869-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"947dd070c5b562411c84b96894913869\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:19.722874 kubelet[2173]: I1213 01:27:19.722488 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/947dd070c5b562411c84b96894913869-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"947dd070c5b562411c84b96894913869\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:19.722874 kubelet[2173]: I1213 01:27:19.722520 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab1c59efb10015889b9a7aa6174ab8be-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"ab1c59efb10015889b9a7aa6174ab8be\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:19.722874 kubelet[2173]: I1213 01:27:19.722550 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab1c59efb10015889b9a7aa6174ab8be-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"ab1c59efb10015889b9a7aa6174ab8be\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:19.723167 kubelet[2173]: I1213 01:27:19.722578 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53749e309681e5a231f281ee0fe2048b-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"53749e309681e5a231f281ee0fe2048b\") " pod="kube-system/kube-scheduler-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:19.723167 kubelet[2173]: I1213 01:27:19.722608 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/947dd070c5b562411c84b96894913869-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"947dd070c5b562411c84b96894913869\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:19.723167 kubelet[2173]: I1213 01:27:19.722650 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab1c59efb10015889b9a7aa6174ab8be-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"ab1c59efb10015889b9a7aa6174ab8be\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:19.723167 kubelet[2173]: I1213 01:27:19.722696 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab1c59efb10015889b9a7aa6174ab8be-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"ab1c59efb10015889b9a7aa6174ab8be\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:19.723423 kubelet[2173]: I1213 01:27:19.722742 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab1c59efb10015889b9a7aa6174ab8be-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"ab1c59efb10015889b9a7aa6174ab8be\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:19.725311 systemd[1]: Created slice kubepods-burstable-pod53749e309681e5a231f281ee0fe2048b.slice - libcontainer container kubepods-burstable-pod53749e309681e5a231f281ee0fe2048b.slice.
Dec 13 01:27:19.726950 kubelet[2173]: E1213 01:27:19.726905 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.34:6443: connect: connection refused" interval="400ms"
Dec 13 01:27:19.926444 kubelet[2173]: I1213 01:27:19.926397 2173 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:19.927019 kubelet[2173]: E1213 01:27:19.926901 2173 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.34:6443/api/v1/nodes\": dial tcp 10.128.0.34:6443: connect: connection refused" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:20.012378 containerd[1465]: time="2024-12-13T01:27:20.012196914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,Uid:947dd070c5b562411c84b96894913869,Namespace:kube-system,Attempt:0,}"
Dec 13 01:27:20.023483 containerd[1465]: time="2024-12-13T01:27:20.023421854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,Uid:ab1c59efb10015889b9a7aa6174ab8be,Namespace:kube-system,Attempt:0,}"
Dec 13 01:27:20.029748 containerd[1465]: time="2024-12-13T01:27:20.029505853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,Uid:53749e309681e5a231f281ee0fe2048b,Namespace:kube-system,Attempt:0,}"
Dec 13 01:27:20.128165 kubelet[2173]: E1213 01:27:20.128048 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.34:6443: connect: connection refused" interval="800ms"
Dec 13 01:27:20.334118 kubelet[2173]: I1213 01:27:20.333595 2173 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:20.334118 kubelet[2173]: E1213 01:27:20.334004 2173 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.34:6443/api/v1/nodes\": dial tcp 10.128.0.34:6443: connect: connection refused" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:20.340797 kubelet[2173]: W1213 01:27:20.340705 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.34:6443: connect: connection refused
Dec 13 01:27:20.340797 kubelet[2173]: E1213 01:27:20.340794 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.34:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:27:20.407788 kubelet[2173]: W1213 01:27:20.407718 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.34:6443: connect: connection refused
Dec 13 01:27:20.407788 kubelet[2173]: E1213 01:27:20.407794 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.34:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:27:20.451891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3031955398.mount: Deactivated successfully.
Dec 13 01:27:20.461219 containerd[1465]: time="2024-12-13T01:27:20.461137204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:27:20.462497 containerd[1465]: time="2024-12-13T01:27:20.462445030Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:27:20.463732 containerd[1465]: time="2024-12-13T01:27:20.463672587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:27:20.464777 containerd[1465]: time="2024-12-13T01:27:20.464718541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954"
Dec 13 01:27:20.466102 containerd[1465]: time="2024-12-13T01:27:20.466043863Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:27:20.467713 containerd[1465]: time="2024-12-13T01:27:20.467665862Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:27:20.468422 containerd[1465]: time="2024-12-13T01:27:20.468326548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:27:20.471361 containerd[1465]: time="2024-12-13T01:27:20.470902778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:27:20.473936 containerd[1465]: time="2024-12-13T01:27:20.473890868Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 450.372388ms"
Dec 13 01:27:20.476722 containerd[1465]: time="2024-12-13T01:27:20.476671306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 447.068028ms"
Dec 13 01:27:20.479264 containerd[1465]: time="2024-12-13T01:27:20.479209025Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 466.915272ms"
Dec 13 01:27:20.600434 kubelet[2173]: W1213 01:27:20.600144 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.34:6443: connect: connection refused
Dec 13 01:27:20.600434 kubelet[2173]: E1213 01:27:20.600246 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.34:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:27:20.673151 containerd[1465]: time="2024-12-13T01:27:20.673032326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:27:20.675287 containerd[1465]: time="2024-12-13T01:27:20.674886924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:27:20.675287 containerd[1465]: time="2024-12-13T01:27:20.674958234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:20.676728 containerd[1465]: time="2024-12-13T01:27:20.676456625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:27:20.676728 containerd[1465]: time="2024-12-13T01:27:20.676616999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:27:20.676728 containerd[1465]: time="2024-12-13T01:27:20.676649519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:20.677544 containerd[1465]: time="2024-12-13T01:27:20.677489109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:20.678007 containerd[1465]: time="2024-12-13T01:27:20.677758001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:20.679931 containerd[1465]: time="2024-12-13T01:27:20.679705266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:27:20.682825 containerd[1465]: time="2024-12-13T01:27:20.681406896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:27:20.682825 containerd[1465]: time="2024-12-13T01:27:20.681450264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:20.682825 containerd[1465]: time="2024-12-13T01:27:20.681706056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:20.687069 kubelet[2173]: W1213 01:27:20.686900 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.34:6443: connect: connection refused
Dec 13 01:27:20.687069 kubelet[2173]: E1213 01:27:20.687010 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.34:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:27:20.725352 systemd[1]: Started cri-containerd-7226b854bfae41585bbc276ec7a5dbbac9fe0cafc5cffd7664d3a9582da0eb83.scope - libcontainer container 7226b854bfae41585bbc276ec7a5dbbac9fe0cafc5cffd7664d3a9582da0eb83.
Dec 13 01:27:20.736703 systemd[1]: Started cri-containerd-8286cf856ebd6bca85d1f62df6fa52a1a982a7f0767d206d32fa2e32b2e95e5b.scope - libcontainer container 8286cf856ebd6bca85d1f62df6fa52a1a982a7f0767d206d32fa2e32b2e95e5b.
Dec 13 01:27:20.743309 systemd[1]: Started cri-containerd-620d14085a1f1e58e030a9c99611bdae586049d9ad8fb76d4e2cfeb53d57ab42.scope - libcontainer container 620d14085a1f1e58e030a9c99611bdae586049d9ad8fb76d4e2cfeb53d57ab42.
Dec 13 01:27:20.820131 containerd[1465]: time="2024-12-13T01:27:20.818912429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,Uid:947dd070c5b562411c84b96894913869,Namespace:kube-system,Attempt:0,} returns sandbox id \"620d14085a1f1e58e030a9c99611bdae586049d9ad8fb76d4e2cfeb53d57ab42\""
Dec 13 01:27:20.827257 kubelet[2173]: E1213 01:27:20.826556 2173 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-21291"
Dec 13 01:27:20.836028 containerd[1465]: time="2024-12-13T01:27:20.835980503Z" level=info msg="CreateContainer within sandbox \"620d14085a1f1e58e030a9c99611bdae586049d9ad8fb76d4e2cfeb53d57ab42\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:27:20.858218 containerd[1465]: time="2024-12-13T01:27:20.857929074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,Uid:ab1c59efb10015889b9a7aa6174ab8be,Namespace:kube-system,Attempt:0,} returns sandbox id \"7226b854bfae41585bbc276ec7a5dbbac9fe0cafc5cffd7664d3a9582da0eb83\""
Dec 13 01:27:20.865247 kubelet[2173]: E1213 01:27:20.863695 2173 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flat"
Dec 13 01:27:20.868024 containerd[1465]: time="2024-12-13T01:27:20.867976315Z" level=info msg="CreateContainer within sandbox \"7226b854bfae41585bbc276ec7a5dbbac9fe0cafc5cffd7664d3a9582da0eb83\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:27:20.868858 containerd[1465]: time="2024-12-13T01:27:20.868807243Z" level=info msg="CreateContainer within sandbox \"620d14085a1f1e58e030a9c99611bdae586049d9ad8fb76d4e2cfeb53d57ab42\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9cd9e5cd25873f813ad453b848aace407582b4d08a4ad89676ce1f71db92981f\""
Dec 13 01:27:20.870099 containerd[1465]: time="2024-12-13T01:27:20.870066767Z" level=info msg="StartContainer for \"9cd9e5cd25873f813ad453b848aace407582b4d08a4ad89676ce1f71db92981f\""
Dec 13 01:27:20.879129 containerd[1465]: time="2024-12-13T01:27:20.879089141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,Uid:53749e309681e5a231f281ee0fe2048b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8286cf856ebd6bca85d1f62df6fa52a1a982a7f0767d206d32fa2e32b2e95e5b\""
Dec 13 01:27:20.881902 kubelet[2173]: E1213 01:27:20.881833 2173 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-21291"
Dec 13 01:27:20.885541 containerd[1465]: time="2024-12-13T01:27:20.885503302Z" level=info msg="CreateContainer within sandbox \"8286cf856ebd6bca85d1f62df6fa52a1a982a7f0767d206d32fa2e32b2e95e5b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:27:20.901002 containerd[1465]: time="2024-12-13T01:27:20.900928189Z" level=info msg="CreateContainer within sandbox \"7226b854bfae41585bbc276ec7a5dbbac9fe0cafc5cffd7664d3a9582da0eb83\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3e7e8077db05380d93152316c459e10686cff4c58688dbfb794772982c51a68d\""
Dec 13 01:27:20.902516 containerd[1465]: time="2024-12-13T01:27:20.902045996Z" level=info msg="StartContainer for \"3e7e8077db05380d93152316c459e10686cff4c58688dbfb794772982c51a68d\""
Dec 13 01:27:20.912519 containerd[1465]: time="2024-12-13T01:27:20.912428478Z" level=info msg="CreateContainer within sandbox \"8286cf856ebd6bca85d1f62df6fa52a1a982a7f0767d206d32fa2e32b2e95e5b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"def397549fdb47148d9ee4ebb674405ca65c4e976c47f835aed33494559a70b2\""
Dec 13 01:27:20.913383 containerd[1465]: time="2024-12-13T01:27:20.913327425Z" level=info msg="StartContainer for \"def397549fdb47148d9ee4ebb674405ca65c4e976c47f835aed33494559a70b2\""
Dec 13 01:27:20.929538 kubelet[2173]: E1213 01:27:20.929307 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.34:6443: connect: connection refused" interval="1.6s"
Dec 13 01:27:20.932607 systemd[1]: Started cri-containerd-9cd9e5cd25873f813ad453b848aace407582b4d08a4ad89676ce1f71db92981f.scope - libcontainer container 9cd9e5cd25873f813ad453b848aace407582b4d08a4ad89676ce1f71db92981f.
Dec 13 01:27:20.977582 systemd[1]: Started cri-containerd-def397549fdb47148d9ee4ebb674405ca65c4e976c47f835aed33494559a70b2.scope - libcontainer container def397549fdb47148d9ee4ebb674405ca65c4e976c47f835aed33494559a70b2.
Dec 13 01:27:21.000816 systemd[1]: Started cri-containerd-3e7e8077db05380d93152316c459e10686cff4c58688dbfb794772982c51a68d.scope - libcontainer container 3e7e8077db05380d93152316c459e10686cff4c58688dbfb794772982c51a68d.
Dec 13 01:27:21.021830 kubelet[2173]: E1213 01:27:21.021621 2173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal.1810983be25f93e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,UID:ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 01:27:19.506047976 +0000 UTC m=+0.678170435,LastTimestamp:2024-12-13 01:27:19.506047976 +0000 UTC m=+0.678170435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal,}"
Dec 13 01:27:21.049370 containerd[1465]: time="2024-12-13T01:27:21.049235027Z" level=info msg="StartContainer for \"9cd9e5cd25873f813ad453b848aace407582b4d08a4ad89676ce1f71db92981f\" returns successfully"
Dec 13 01:27:21.129753 containerd[1465]: time="2024-12-13T01:27:21.128689392Z" level=info msg="StartContainer for \"3e7e8077db05380d93152316c459e10686cff4c58688dbfb794772982c51a68d\" returns successfully"
Dec 13 01:27:21.139255 containerd[1465]: time="2024-12-13T01:27:21.139189476Z" level=info msg="StartContainer for \"def397549fdb47148d9ee4ebb674405ca65c4e976c47f835aed33494559a70b2\" returns successfully"
Dec 13 01:27:21.140617 kubelet[2173]: I1213 01:27:21.140582 2173 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:21.141052 kubelet[2173]: E1213 01:27:21.141004 2173 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.34:6443/api/v1/nodes\": dial tcp 10.128.0.34:6443: connect: connection refused" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:22.750985 kubelet[2173]: I1213 01:27:22.750943 2173 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:24.323366 kubelet[2173]: E1213 01:27:24.323290 2173 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" not found" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:24.493732 kubelet[2173]: I1213 01:27:24.493682 2173 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:24.505145 kubelet[2173]: I1213 01:27:24.505101 2173 apiserver.go:52] "Watching apiserver"
Dec 13 01:27:24.522787 kubelet[2173]: I1213 01:27:24.522729 2173 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 01:27:24.869379 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
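[Editor's note: across this section the "Failed to ensure lease exists, will retry" entries show the kubelet doubling its retry interval while the apiserver is still unreachable: 200ms, 400ms, 800ms, 1.6s. A generic exponential-backoff sketch reproducing that schedule, illustrative only:]

def backoff(base=0.2, factor=2.0, steps=4):
    # Yields base, base*factor, base*factor**2, ... matching the
    # 200ms -> 400ms -> 800ms -> 1.6s intervals logged above.
    delay = base
    for _ in range(steps):
        yield delay
        delay *= factor

print([f"{d:g}s" for d in backoff()])  # ['0.2s', '0.4s', '0.8s', '1.6s']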
Dec 13 01:27:25.249630 kubelet[2173]: W1213 01:27:25.248896 2173 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 01:27:26.284468 systemd[1]: Reloading requested from client PID 2448 ('systemctl') (unit session-7.scope)...
Dec 13 01:27:26.284492 systemd[1]: Reloading...
Dec 13 01:27:26.423422 zram_generator::config[2485]: No configuration found.
Dec 13 01:27:26.577480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:27:26.704970 systemd[1]: Reloading finished in 419 ms.
Dec 13 01:27:26.762622 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:27:26.781173 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:27:26.781612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:27:26.781803 systemd[1]: kubelet.service: Consumed 1.144s CPU time, 118.5M memory peak, 0B memory swap peak.
Dec 13 01:27:26.788782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:27:27.062240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:27:27.074139 (kubelet)[2536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:27:27.141298 kubelet[2536]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:27:27.141298 kubelet[2536]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:27:27.141298 kubelet[2536]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:27:27.141891 kubelet[2536]: I1213 01:27:27.141422 2536 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:27:27.158118 kubelet[2536]: I1213 01:27:27.157210 2536 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 01:27:27.158118 kubelet[2536]: I1213 01:27:27.157243 2536 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:27:27.158118 kubelet[2536]: I1213 01:27:27.157569 2536 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 01:27:27.160063 kubelet[2536]: I1213 01:27:27.160037 2536 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 01:27:27.162629 kubelet[2536]: I1213 01:27:27.162592 2536 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:27:27.167864 kubelet[2536]: E1213 01:27:27.167814 2536 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 01:27:27.167864 kubelet[2536]: I1213 01:27:27.167852 2536 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 01:27:27.171466 kubelet[2536]: I1213 01:27:27.171393 2536 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:27:27.171612 kubelet[2536]: I1213 01:27:27.171571 2536 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 01:27:27.171828 kubelet[2536]: I1213 01:27:27.171761 2536 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:27:27.172058 kubelet[2536]: I1213 01:27:27.171812 2536 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 01:27:27.172227 kubelet[2536]: I1213 01:27:27.172059 2536 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:27:27.172227 kubelet[2536]: I1213 01:27:27.172077 2536 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 01:27:27.172227 kubelet[2536]: I1213 01:27:27.172146 2536 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:27:27.172438 kubelet[2536]: I1213 01:27:27.172321 2536 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 01:27:27.172438 kubelet[2536]: I1213 01:27:27.172361 2536 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:27:27.172438 kubelet[2536]: I1213 01:27:27.172400 2536 kubelet.go:314] "Adding apiserver pod source"
Dec 13 01:27:27.172438 kubelet[2536]: I1213 01:27:27.172416 2536 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:27:27.177367 kubelet[2536]: I1213 01:27:27.174217 2536 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:27:27.177367 kubelet[2536]: I1213 01:27:27.174874 2536 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:27:27.182172 kubelet[2536]: I1213 01:27:27.181076 2536 server.go:1269] "Started kubelet"
Dec 13 01:27:27.186356 kubelet[2536]: I1213 01:27:27.184015 2536 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:27:27.193466 kubelet[2536]: I1213 01:27:27.193419 2536 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:27:27.195101 kubelet[2536]: I1213 01:27:27.195073 2536 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 01:27:27.198544 kubelet[2536]: I1213 01:27:27.196616 2536 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 01:27:27.199424 kubelet[2536]: I1213 01:27:27.199407 2536 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 01:27:27.199796 kubelet[2536]: E1213 01:27:27.199775 2536 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" not found"
Dec 13 01:27:27.204679 kubelet[2536]: I1213 01:27:27.204655 2536 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 01:27:27.205005 kubelet[2536]: I1213 01:27:27.204991 2536 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 01:27:27.207297 kubelet[2536]: I1213 01:27:27.206293 2536 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:27:27.208089 kubelet[2536]: I1213 01:27:27.207736 2536 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:27:27.214810 kubelet[2536]: I1213 01:27:27.214604 2536 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:27:27.215541 kubelet[2536]: I1213 01:27:27.215254 2536 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:27:27.223980 kubelet[2536]: I1213 01:27:27.223937 2536 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:27:27.226116 kubelet[2536]: I1213 01:27:27.225431 2536 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:27:27.229366 kubelet[2536]: I1213 01:27:27.228896 2536 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:27:27.229366 kubelet[2536]: I1213 01:27:27.228927 2536 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:27:27.229366 kubelet[2536]: I1213 01:27:27.228950 2536 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 01:27:27.229366 kubelet[2536]: E1213 01:27:27.229009 2536 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:27:27.245424 kubelet[2536]: E1213 01:27:27.242334 2536 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:27:27.317141 kubelet[2536]: I1213 01:27:27.317018 2536 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:27:27.318439 kubelet[2536]: I1213 01:27:27.318408 2536 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:27:27.318706 kubelet[2536]: I1213 01:27:27.318691 2536 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:27:27.319027 kubelet[2536]: I1213 01:27:27.319010 2536 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 01:27:27.319155 kubelet[2536]: I1213 01:27:27.319124 2536 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 01:27:27.319248 kubelet[2536]: I1213 01:27:27.319238 2536 policy_none.go:49] "None policy: Start"
Dec 13 01:27:27.321501 kubelet[2536]: I1213 01:27:27.321480 2536 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:27:27.321758 kubelet[2536]: I1213 01:27:27.321741 2536 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:27:27.322373 kubelet[2536]: I1213 01:27:27.322122 2536 state_mem.go:75] "Updated machine memory state"
Dec 13 01:27:27.329279 kubelet[2536]: E1213 01:27:27.329258 2536 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 01:27:27.330820 kubelet[2536]: I1213 01:27:27.330791 2536 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:27:27.331049 kubelet[2536]: I1213 01:27:27.331029 2536 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 01:27:27.331117 kubelet[2536]: I1213 01:27:27.331053 2536 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:27:27.331566 kubelet[2536]: I1213 01:27:27.331542 2536 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:27:27.455160 kubelet[2536]: I1213 01:27:27.455123 2536 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:27.465807 kubelet[2536]: I1213 01:27:27.465721 2536 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:27.465974 kubelet[2536]: I1213 01:27:27.465829 2536 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:27.543028 kubelet[2536]: W1213 01:27:27.541803 2536 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 01:27:27.543028 kubelet[2536]: W1213 01:27:27.541901 2536 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 01:27:27.543028 kubelet[2536]: E1213 01:27:27.542153 2536 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:27.543028 kubelet[2536]: W1213 01:27:27.542304 2536 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 01:27:27.607733 kubelet[2536]: I1213 01:27:27.607446 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/947dd070c5b562411c84b96894913869-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"947dd070c5b562411c84b96894913869\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:27.607733 kubelet[2536]: I1213 01:27:27.607502 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/947dd070c5b562411c84b96894913869-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"947dd070c5b562411c84b96894913869\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:27.607733 kubelet[2536]: I1213 01:27:27.607542 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab1c59efb10015889b9a7aa6174ab8be-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"ab1c59efb10015889b9a7aa6174ab8be\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:27.607733 kubelet[2536]: I1213 01:27:27.607570 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab1c59efb10015889b9a7aa6174ab8be-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"ab1c59efb10015889b9a7aa6174ab8be\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:27.608034 kubelet[2536]: I1213 01:27:27.607605 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53749e309681e5a231f281ee0fe2048b-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"53749e309681e5a231f281ee0fe2048b\") " pod="kube-system/kube-scheduler-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:27.608034 kubelet[2536]: I1213 01:27:27.607632 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/947dd070c5b562411c84b96894913869-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"947dd070c5b562411c84b96894913869\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:27.608034 kubelet[2536]: I1213 01:27:27.607661 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab1c59efb10015889b9a7aa6174ab8be-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"ab1c59efb10015889b9a7aa6174ab8be\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:27.608034 kubelet[2536]: I1213 01:27:27.607686 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab1c59efb10015889b9a7aa6174ab8be-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"ab1c59efb10015889b9a7aa6174ab8be\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:27.608159 kubelet[2536]: I1213 01:27:27.607718 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab1c59efb10015889b9a7aa6174ab8be-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal\" (UID: \"ab1c59efb10015889b9a7aa6174ab8be\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal"
Dec 13 01:27:28.174713 kubelet[2536]: I1213 01:27:28.173421 2536 apiserver.go:52] "Watching apiserver"
Dec 13 01:27:28.205914 kubelet[2536]: I1213 01:27:28.205806 2536 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 01:27:28.378039 kubelet[2536]: I1213 01:27:28.377812 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" podStartSLOduration=3.377785262 podStartE2EDuration="3.377785262s" podCreationTimestamp="2024-12-13 01:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:28.342127493 +0000 UTC m=+1.261062507" watchObservedRunningTime="2024-12-13 01:27:28.377785262 +0000 UTC m=+1.296720271"
Dec 13 01:27:28.417679 kubelet[2536]: I1213 01:27:28.417588 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" podStartSLOduration=1.417562177 podStartE2EDuration="1.417562177s" podCreationTimestamp="2024-12-13 01:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:28.37848772 +0000 UTC m=+1.297422730" watchObservedRunningTime="2024-12-13 01:27:28.417562177 +0000 UTC m=+1.336497185"
Dec 13 01:27:28.459942 kubelet[2536]: I1213 01:27:28.459776 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" podStartSLOduration=1.459752014 podStartE2EDuration="1.459752014s" podCreationTimestamp="2024-12-13 01:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:28.418306076 +0000 UTC m=+1.337241085" watchObservedRunningTime="2024-12-13 01:27:28.459752014 +0000 UTC m=+1.378687024"
Dec 13 01:27:31.251700 sudo[1710]: pam_unix(sudo:session): session closed for user root
Dec 13 01:27:31.294676 sshd[1707]: pam_unix(sshd:session): session closed for user core
Dec 13 01:27:31.300962 systemd[1]: sshd@6-10.128.0.34:22-147.75.109.163:44194.service: Deactivated successfully.
Dec 13 01:27:31.303773 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:27:31.304033 systemd[1]: session-7.scope: Consumed 6.103s CPU time, 155.8M memory peak, 0B memory swap peak.
Dec 13 01:27:31.305053 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:27:31.307296 systemd-logind[1452]: Removed session 7.
Dec 13 01:27:33.023276 kubelet[2536]: I1213 01:27:33.023156 2536 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 01:27:33.024675 containerd[1465]: time="2024-12-13T01:27:33.024303228Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:27:33.026541 kubelet[2536]: I1213 01:27:33.024641 2536 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 01:27:33.906242 systemd[1]: Created slice kubepods-besteffort-podb89f9dda_ff0b_4c38_99bf_9bfe68e21e63.slice - libcontainer container kubepods-besteffort-podb89f9dda_ff0b_4c38_99bf_9bfe68e21e63.slice.
Dec 13 01:27:34.051240 kubelet[2536]: I1213 01:27:34.051159 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kkx9\" (UniqueName: \"kubernetes.io/projected/b89f9dda-ff0b-4c38-99bf-9bfe68e21e63-kube-api-access-6kkx9\") pod \"kube-proxy-h8lgm\" (UID: \"b89f9dda-ff0b-4c38-99bf-9bfe68e21e63\") " pod="kube-system/kube-proxy-h8lgm"
Dec 13 01:27:34.051240 kubelet[2536]: I1213 01:27:34.051229 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b89f9dda-ff0b-4c38-99bf-9bfe68e21e63-kube-proxy\") pod \"kube-proxy-h8lgm\" (UID: \"b89f9dda-ff0b-4c38-99bf-9bfe68e21e63\") " pod="kube-system/kube-proxy-h8lgm"
Dec 13 01:27:34.051880 kubelet[2536]: I1213 01:27:34.051258 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b89f9dda-ff0b-4c38-99bf-9bfe68e21e63-xtables-lock\") pod \"kube-proxy-h8lgm\" (UID: \"b89f9dda-ff0b-4c38-99bf-9bfe68e21e63\") " pod="kube-system/kube-proxy-h8lgm"
Dec 13 01:27:34.051880 kubelet[2536]: I1213 01:27:34.051281 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b89f9dda-ff0b-4c38-99bf-9bfe68e21e63-lib-modules\") pod \"kube-proxy-h8lgm\" (UID: \"b89f9dda-ff0b-4c38-99bf-9bfe68e21e63\") " pod="kube-system/kube-proxy-h8lgm"
Dec 13 01:27:34.136912 systemd[1]: Created slice kubepods-besteffort-pod1405700f_36cd_4486_8649_ca397478c907.slice - libcontainer container kubepods-besteffort-pod1405700f_36cd_4486_8649_ca397478c907.slice.
Dec 13 01:27:34.151583 kubelet[2536]: I1213 01:27:34.151536 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1405700f-36cd-4486-8649-ca397478c907-var-lib-calico\") pod \"tigera-operator-76c4976dd7-zlbzh\" (UID: \"1405700f-36cd-4486-8649-ca397478c907\") " pod="tigera-operator/tigera-operator-76c4976dd7-zlbzh"
Dec 13 01:27:34.151783 kubelet[2536]: I1213 01:27:34.151591 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrm78\" (UniqueName: \"kubernetes.io/projected/1405700f-36cd-4486-8649-ca397478c907-kube-api-access-wrm78\") pod \"tigera-operator-76c4976dd7-zlbzh\" (UID: \"1405700f-36cd-4486-8649-ca397478c907\") " pod="tigera-operator/tigera-operator-76c4976dd7-zlbzh"
Dec 13 01:27:34.220882 containerd[1465]: time="2024-12-13T01:27:34.220682322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h8lgm,Uid:b89f9dda-ff0b-4c38-99bf-9bfe68e21e63,Namespace:kube-system,Attempt:0,}"
Dec 13 01:27:34.258921 containerd[1465]: time="2024-12-13T01:27:34.258289456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:27:34.258921 containerd[1465]: time="2024-12-13T01:27:34.258398973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:27:34.258921 containerd[1465]: time="2024-12-13T01:27:34.258597227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:34.258921 containerd[1465]: time="2024-12-13T01:27:34.258721892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:34.305594 systemd[1]: Started cri-containerd-e12e581f3d71197fc53af586e45ac8cd2d264c12eb831505fc38588378d98413.scope - libcontainer container e12e581f3d71197fc53af586e45ac8cd2d264c12eb831505fc38588378d98413.
Dec 13 01:27:34.345560 containerd[1465]: time="2024-12-13T01:27:34.345417184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h8lgm,Uid:b89f9dda-ff0b-4c38-99bf-9bfe68e21e63,Namespace:kube-system,Attempt:0,} returns sandbox id \"e12e581f3d71197fc53af586e45ac8cd2d264c12eb831505fc38588378d98413\""
Dec 13 01:27:34.350997 containerd[1465]: time="2024-12-13T01:27:34.350840630Z" level=info msg="CreateContainer within sandbox \"e12e581f3d71197fc53af586e45ac8cd2d264c12eb831505fc38588378d98413\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:27:34.369668 containerd[1465]: time="2024-12-13T01:27:34.369608289Z" level=info msg="CreateContainer within sandbox \"e12e581f3d71197fc53af586e45ac8cd2d264c12eb831505fc38588378d98413\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"99e086f8ed3618c602a0c7519280e7d33041f2917c5789e4047fe3c39e5f0362\""
Dec 13 01:27:34.370598 containerd[1465]: time="2024-12-13T01:27:34.370552243Z" level=info msg="StartContainer for \"99e086f8ed3618c602a0c7519280e7d33041f2917c5789e4047fe3c39e5f0362\""
Dec 13 01:27:34.409611 systemd[1]: Started cri-containerd-99e086f8ed3618c602a0c7519280e7d33041f2917c5789e4047fe3c39e5f0362.scope - libcontainer container 99e086f8ed3618c602a0c7519280e7d33041f2917c5789e4047fe3c39e5f0362.
Dec 13 01:27:34.442050 containerd[1465]: time="2024-12-13T01:27:34.442005510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-zlbzh,Uid:1405700f-36cd-4486-8649-ca397478c907,Namespace:tigera-operator,Attempt:0,}"
Dec 13 01:27:34.457707 containerd[1465]: time="2024-12-13T01:27:34.457505171Z" level=info msg="StartContainer for \"99e086f8ed3618c602a0c7519280e7d33041f2917c5789e4047fe3c39e5f0362\" returns successfully"
Dec 13 01:27:34.505008 containerd[1465]: time="2024-12-13T01:27:34.504496235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:27:34.508175 containerd[1465]: time="2024-12-13T01:27:34.506325717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:27:34.508175 containerd[1465]: time="2024-12-13T01:27:34.506378119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:34.508175 containerd[1465]: time="2024-12-13T01:27:34.506497881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:34.545584 systemd[1]: Started cri-containerd-e1730f3c5be216e4a10fe33c7d7884b22c41e0f32169b0c5304bf8dfba26adfe.scope - libcontainer container e1730f3c5be216e4a10fe33c7d7884b22c41e0f32169b0c5304bf8dfba26adfe.
Dec 13 01:27:34.623503 containerd[1465]: time="2024-12-13T01:27:34.623427628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-zlbzh,Uid:1405700f-36cd-4486-8649-ca397478c907,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e1730f3c5be216e4a10fe33c7d7884b22c41e0f32169b0c5304bf8dfba26adfe\""
Dec 13 01:27:34.628839 containerd[1465]: time="2024-12-13T01:27:34.627664775Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 01:27:35.300399 kubelet[2536]: I1213 01:27:35.300158 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h8lgm" podStartSLOduration=2.300134488 podStartE2EDuration="2.300134488s" podCreationTimestamp="2024-12-13 01:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:35.299928232 +0000 UTC m=+8.218863237" watchObservedRunningTime="2024-12-13 01:27:35.300134488 +0000 UTC m=+8.219069499"
Dec 13 01:27:36.155102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount966719865.mount: Deactivated successfully.
Dec 13 01:27:36.880751 containerd[1465]: time="2024-12-13T01:27:36.880684727Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:36.882318 containerd[1465]: time="2024-12-13T01:27:36.882047844Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763697"
Dec 13 01:27:36.885391 containerd[1465]: time="2024-12-13T01:27:36.883760755Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:36.887663 containerd[1465]: time="2024-12-13T01:27:36.887622719Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:36.888813 containerd[1465]: time="2024-12-13T01:27:36.888767814Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.259724312s"
Dec 13 01:27:36.888935 containerd[1465]: time="2024-12-13T01:27:36.888816961Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Dec 13 01:27:36.893198 containerd[1465]: time="2024-12-13T01:27:36.893148814Z" level=info msg="CreateContainer within sandbox \"e1730f3c5be216e4a10fe33c7d7884b22c41e0f32169b0c5304bf8dfba26adfe\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 01:27:36.920704 containerd[1465]: time="2024-12-13T01:27:36.920634760Z" level=info msg="CreateContainer within sandbox \"e1730f3c5be216e4a10fe33c7d7884b22c41e0f32169b0c5304bf8dfba26adfe\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"13d175043ad95870c51b42c8b69e5308eb1f123a79f9cf6df307350d65c0c471\""
Dec 13 01:27:36.922319 containerd[1465]: time="2024-12-13T01:27:36.922000882Z" level=info msg="StartContainer for \"13d175043ad95870c51b42c8b69e5308eb1f123a79f9cf6df307350d65c0c471\""
Dec 13 01:27:36.962522 systemd[1]: run-containerd-runc-k8s.io-13d175043ad95870c51b42c8b69e5308eb1f123a79f9cf6df307350d65c0c471-runc.vWiOPZ.mount: Deactivated successfully.
Dec 13 01:27:36.971585 systemd[1]: Started cri-containerd-13d175043ad95870c51b42c8b69e5308eb1f123a79f9cf6df307350d65c0c471.scope - libcontainer container 13d175043ad95870c51b42c8b69e5308eb1f123a79f9cf6df307350d65c0c471.
Dec 13 01:27:37.007799 containerd[1465]: time="2024-12-13T01:27:37.007605111Z" level=info msg="StartContainer for \"13d175043ad95870c51b42c8b69e5308eb1f123a79f9cf6df307350d65c0c471\" returns successfully"
Dec 13 01:27:37.305086 kubelet[2536]: I1213 01:27:37.304987 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-zlbzh" podStartSLOduration=1.040176305 podStartE2EDuration="3.304959452s" podCreationTimestamp="2024-12-13 01:27:34 +0000 UTC" firstStartedPulling="2024-12-13 01:27:34.625866367 +0000 UTC m=+7.544801362" lastFinishedPulling="2024-12-13 01:27:36.890649523 +0000 UTC m=+9.809584509" observedRunningTime="2024-12-13 01:27:37.304235167 +0000 UTC m=+10.223170175" watchObservedRunningTime="2024-12-13 01:27:37.304959452 +0000 UTC m=+10.223894461"
Dec 13 01:27:38.959895 update_engine[1455]: I20241213 01:27:38.959815 1455 update_attempter.cc:509] Updating boot flags...
Dec 13 01:27:39.029397 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2920)
Dec 13 01:27:39.155631 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2922)
Dec 13 01:27:39.322442 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2922)
Dec 13 01:27:40.444673 systemd[1]: Created slice kubepods-besteffort-podcecd3461_e682_4333_915e_d1bd997ee129.slice - libcontainer container kubepods-besteffort-podcecd3461_e682_4333_915e_d1bd997ee129.slice.
Dec 13 01:27:40.577509 systemd[1]: Created slice kubepods-besteffort-pod925c8b29_3822_4079_81b9_6f728a1e3a50.slice - libcontainer container kubepods-besteffort-pod925c8b29_3822_4079_81b9_6f728a1e3a50.slice.
Dec 13 01:27:40.592828 kubelet[2536]: I1213 01:27:40.592774 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/925c8b29-3822-4079-81b9-6f728a1e3a50-var-lib-calico\") pod \"calico-node-l87t5\" (UID: \"925c8b29-3822-4079-81b9-6f728a1e3a50\") " pod="calico-system/calico-node-l87t5"
Dec 13 01:27:40.593483 kubelet[2536]: I1213 01:27:40.592871 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cecd3461-e682-4333-915e-d1bd997ee129-typha-certs\") pod \"calico-typha-6bfc6f65d7-q267v\" (UID: \"cecd3461-e682-4333-915e-d1bd997ee129\") " pod="calico-system/calico-typha-6bfc6f65d7-q267v"
Dec 13 01:27:40.593483 kubelet[2536]: I1213 01:27:40.592900 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt4hh\" (UniqueName: \"kubernetes.io/projected/cecd3461-e682-4333-915e-d1bd997ee129-kube-api-access-dt4hh\") pod \"calico-typha-6bfc6f65d7-q267v\" (UID: \"cecd3461-e682-4333-915e-d1bd997ee129\") " pod="calico-system/calico-typha-6bfc6f65d7-q267v"
Dec 13 01:27:40.593483 kubelet[2536]: I1213 01:27:40.592982 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cecd3461-e682-4333-915e-d1bd997ee129-tigera-ca-bundle\") pod \"calico-typha-6bfc6f65d7-q267v\" (UID: \"cecd3461-e682-4333-915e-d1bd997ee129\") " pod="calico-system/calico-typha-6bfc6f65d7-q267v"
Dec 13 01:27:40.593483 kubelet[2536]: I1213 01:27:40.593049 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/925c8b29-3822-4079-81b9-6f728a1e3a50-var-run-calico\") pod \"calico-node-l87t5\" (UID: \"925c8b29-3822-4079-81b9-6f728a1e3a50\") " pod="calico-system/calico-node-l87t5"
Dec 13 01:27:40.593483 kubelet[2536]: I1213 01:27:40.593075 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfjkf\" (UniqueName: \"kubernetes.io/projected/925c8b29-3822-4079-81b9-6f728a1e3a50-kube-api-access-sfjkf\") pod \"calico-node-l87t5\" (UID: \"925c8b29-3822-4079-81b9-6f728a1e3a50\") " pod="calico-system/calico-node-l87t5"
Dec 13 01:27:40.593804 kubelet[2536]: I1213 01:27:40.593147 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/925c8b29-3822-4079-81b9-6f728a1e3a50-flexvol-driver-host\") pod \"calico-node-l87t5\" (UID: \"925c8b29-3822-4079-81b9-6f728a1e3a50\") " pod="calico-system/calico-node-l87t5"
Dec 13 01:27:40.593804 kubelet[2536]: I1213 01:27:40.593207 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/925c8b29-3822-4079-81b9-6f728a1e3a50-xtables-lock\") pod \"calico-node-l87t5\" (UID: \"925c8b29-3822-4079-81b9-6f728a1e3a50\") " pod="calico-system/calico-node-l87t5"
Dec 13 01:27:40.593804 kubelet[2536]: I1213 01:27:40.593277 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/925c8b29-3822-4079-81b9-6f728a1e3a50-policysync\") pod \"calico-node-l87t5\" (UID: \"925c8b29-3822-4079-81b9-6f728a1e3a50\") " pod="calico-system/calico-node-l87t5"
Dec 13 01:27:40.593804 kubelet[2536]: I1213 01:27:40.593305 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/925c8b29-3822-4079-81b9-6f728a1e3a50-node-certs\") pod \"calico-node-l87t5\" (UID: \"925c8b29-3822-4079-81b9-6f728a1e3a50\") " pod="calico-system/calico-node-l87t5"
Dec 13 01:27:40.593804 kubelet[2536]: I1213 01:27:40.593374 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/925c8b29-3822-4079-81b9-6f728a1e3a50-cni-bin-dir\") pod \"calico-node-l87t5\" (UID: \"925c8b29-3822-4079-81b9-6f728a1e3a50\") " pod="calico-system/calico-node-l87t5"
Dec 13 01:27:40.594049 kubelet[2536]: I1213 01:27:40.593445 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/925c8b29-3822-4079-81b9-6f728a1e3a50-tigera-ca-bundle\") pod \"calico-node-l87t5\" (UID: \"925c8b29-3822-4079-81b9-6f728a1e3a50\") " pod="calico-system/calico-node-l87t5"
Dec 13 01:27:40.594049 kubelet[2536]: I1213 01:27:40.593476 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/925c8b29-3822-4079-81b9-6f728a1e3a50-lib-modules\") pod \"calico-node-l87t5\" (UID: \"925c8b29-3822-4079-81b9-6f728a1e3a50\") " pod="calico-system/calico-node-l87t5"
Dec 13 01:27:40.594049 kubelet[2536]: I1213 01:27:40.593540 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/925c8b29-3822-4079-81b9-6f728a1e3a50-cni-net-dir\") pod \"calico-node-l87t5\" (UID: \"925c8b29-3822-4079-81b9-6f728a1e3a50\") " pod="calico-system/calico-node-l87t5"
Dec 13 01:27:40.594049 kubelet[2536]: I1213 01:27:40.593609 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/925c8b29-3822-4079-81b9-6f728a1e3a50-cni-log-dir\") pod \"calico-node-l87t5\" (UID: \"925c8b29-3822-4079-81b9-6f728a1e3a50\") " pod="calico-system/calico-node-l87t5"
Dec 13 01:27:40.684945 kubelet[2536]: E1213 01:27:40.684860 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bb5cr" podUID="a545ca75-b4b0-41f8-ba2f-947389823539"
Dec 13 01:27:40.695472 kubelet[2536]: I1213 01:27:40.694038 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a545ca75-b4b0-41f8-ba2f-947389823539-varrun\") pod \"csi-node-driver-bb5cr\" (UID: \"a545ca75-b4b0-41f8-ba2f-947389823539\") " pod="calico-system/csi-node-driver-bb5cr"
Dec 13 01:27:40.695472 kubelet[2536]: I1213 01:27:40.694142 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a545ca75-b4b0-41f8-ba2f-947389823539-registration-dir\") pod \"csi-node-driver-bb5cr\" (UID: \"a545ca75-b4b0-41f8-ba2f-947389823539\") " pod="calico-system/csi-node-driver-bb5cr"
Dec 13 01:27:40.695472 kubelet[2536]: I1213 01:27:40.694217 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dr9x\" (UniqueName: \"kubernetes.io/projected/a545ca75-b4b0-41f8-ba2f-947389823539-kube-api-access-4dr9x\") pod \"csi-node-driver-bb5cr\" (UID: \"a545ca75-b4b0-41f8-ba2f-947389823539\") " pod="calico-system/csi-node-driver-bb5cr"
Dec 13 01:27:40.695472 kubelet[2536]: I1213 01:27:40.694282 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a545ca75-b4b0-41f8-ba2f-947389823539-kubelet-dir\") pod \"csi-node-driver-bb5cr\" (UID: \"a545ca75-b4b0-41f8-ba2f-947389823539\") " pod="calico-system/csi-node-driver-bb5cr"
Dec 13 01:27:40.695472 kubelet[2536]: I1213 01:27:40.694373 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a545ca75-b4b0-41f8-ba2f-947389823539-socket-dir\") pod \"csi-node-driver-bb5cr\" (UID: \"a545ca75-b4b0-41f8-ba2f-947389823539\") " pod="calico-system/csi-node-driver-bb5cr"
Dec 13 01:27:40.699720 kubelet[2536]: E1213 01:27:40.699615 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.699720 kubelet[2536]: W1213 01:27:40.699662 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.699720 kubelet[2536]: E1213 01:27:40.699686 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.701551 kubelet[2536]: E1213 01:27:40.700652 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.701551 kubelet[2536]: W1213 01:27:40.701419 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.701551 kubelet[2536]: E1213 01:27:40.701477 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.703805 kubelet[2536]: E1213 01:27:40.702190 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.703805 kubelet[2536]: W1213 01:27:40.702210 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.703805 kubelet[2536]: E1213 01:27:40.702264 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.704562 kubelet[2536]: E1213 01:27:40.704332 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.704562 kubelet[2536]: W1213 01:27:40.704399 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.705063 kubelet[2536]: E1213 01:27:40.704985 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.705063 kubelet[2536]: W1213 01:27:40.705003 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.708640 kubelet[2536]: E1213 01:27:40.708448 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.708640 kubelet[2536]: E1213 01:27:40.708504 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.708640 kubelet[2536]: E1213 01:27:40.708598 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.708640 kubelet[2536]: W1213 01:27:40.708611 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.713141 kubelet[2536]: E1213 01:27:40.712927 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.713141 kubelet[2536]: W1213 01:27:40.712947 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.713141 kubelet[2536]: E1213 01:27:40.713054 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.713141 kubelet[2536]: E1213 01:27:40.713080 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.718788 kubelet[2536]: E1213 01:27:40.718610 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.718788 kubelet[2536]: W1213 01:27:40.718630 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.718788 kubelet[2536]: E1213 01:27:40.718730 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.720792 kubelet[2536]: E1213 01:27:40.720615 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.720792 kubelet[2536]: W1213 01:27:40.720635 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.721392 kubelet[2536]: E1213 01:27:40.721371 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.721670 kubelet[2536]: W1213 01:27:40.721519 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.721670 kubelet[2536]: E1213 01:27:40.721463 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.721670 kubelet[2536]: E1213 01:27:40.721614 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.722295 kubelet[2536]: E1213 01:27:40.722120 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.722295 kubelet[2536]: W1213 01:27:40.722138 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.725256 kubelet[2536]: E1213 01:27:40.725115 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.725256 kubelet[2536]: W1213 01:27:40.725135 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.725256 kubelet[2536]: E1213 01:27:40.725147 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.725256 kubelet[2536]: E1213 01:27:40.725172 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.725983 kubelet[2536]: E1213 01:27:40.725963 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.726236 kubelet[2536]: W1213 01:27:40.726085 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.726236 kubelet[2536]: E1213 01:27:40.726135 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.727085 kubelet[2536]: E1213 01:27:40.726942 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.727085 kubelet[2536]: W1213 01:27:40.726961 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.727085 kubelet[2536]: E1213 01:27:40.727009 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.727678 kubelet[2536]: E1213 01:27:40.727658 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.727900 kubelet[2536]: W1213 01:27:40.727783 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.728169 kubelet[2536]: E1213 01:27:40.728154 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.728378 kubelet[2536]: W1213 01:27:40.728260 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.728581 kubelet[2536]: E1213 01:27:40.728552 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.728671 kubelet[2536]: E1213 01:27:40.728599 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.730422 kubelet[2536]: E1213 01:27:40.730384 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.730909 kubelet[2536]: W1213 01:27:40.730707 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.732480 kubelet[2536]: E1213 01:27:40.731601 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.732480 kubelet[2536]: W1213 01:27:40.731674 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.733013 kubelet[2536]: E1213 01:27:40.732878 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.733013 kubelet[2536]: W1213 01:27:40.732898 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.735069 kubelet[2536]: E1213 01:27:40.734933 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.735069 kubelet[2536]: W1213 01:27:40.734954 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.738135 kubelet[2536]: E1213 01:27:40.737582 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.738135 kubelet[2536]: W1213 01:27:40.737602 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.739604 kubelet[2536]: E1213 01:27:40.739408 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.739604 kubelet[2536]: W1213 01:27:40.739432 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.739994 kubelet[2536]: E1213 01:27:40.739964 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.740276 kubelet[2536]: W1213 01:27:40.740102 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.740276 kubelet[2536]: E1213 01:27:40.740130 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.740276 kubelet[2536]: E1213 01:27:40.740152 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.740844 kubelet[2536]: E1213 01:27:40.740813 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.741014 kubelet[2536]: W1213 01:27:40.740943 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.741014 kubelet[2536]: E1213 01:27:40.740968 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.741323 kubelet[2536]: E1213 01:27:40.741197 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.741323 kubelet[2536]: E1213 01:27:40.741221 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.741323 kubelet[2536]: E1213 01:27:40.741235 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.741323 kubelet[2536]: E1213 01:27:40.741254 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.741323 kubelet[2536]: E1213 01:27:40.741271 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.742126 kubelet[2536]: E1213 01:27:40.741928 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.742126 kubelet[2536]: W1213 01:27:40.741966 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.742126 kubelet[2536]: E1213 01:27:40.741985 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.747281 kubelet[2536]: E1213 01:27:40.747056 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.747281 kubelet[2536]: W1213 01:27:40.747077 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.747281 kubelet[2536]: E1213 01:27:40.747112 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.749640 kubelet[2536]: E1213 01:27:40.749619 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.755373 kubelet[2536]: W1213 01:27:40.755290 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.755477 kubelet[2536]: E1213 01:27:40.755375 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.756372 kubelet[2536]: E1213 01:27:40.755996 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.756372 kubelet[2536]: W1213 01:27:40.756020 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.756594 kubelet[2536]: E1213 01:27:40.756506 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.756594 kubelet[2536]: W1213 01:27:40.756521 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.756954 kubelet[2536]: E1213 01:27:40.756926 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.757054 kubelet[2536]: E1213 01:27:40.756976 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.757693 kubelet[2536]: E1213 01:27:40.757623 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.758388 kubelet[2536]: W1213 01:27:40.758189 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.759253 kubelet[2536]: E1213 01:27:40.759172 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.762111 kubelet[2536]: E1213 01:27:40.761988 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.762111 kubelet[2536]: W1213 01:27:40.762012 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.762746 kubelet[2536]: E1213 01:27:40.762639 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.763309 kubelet[2536]: E1213 01:27:40.763194 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.763309 kubelet[2536]: W1213 01:27:40.763218 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.765458 kubelet[2536]: E1213 01:27:40.763583 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.765458 kubelet[2536]: E1213 01:27:40.765075 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.765458 kubelet[2536]: W1213 01:27:40.765094 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.767266 kubelet[2536]: E1213 01:27:40.765712 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:27:40.772371 kubelet[2536]: E1213 01:27:40.768975 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:27:40.772371 kubelet[2536]: W1213 01:27:40.768995 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:27:40.772371 kubelet[2536]: E1213 01:27:40.769014 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 13 01:27:40.773098 kubelet[2536]: E1213 01:27:40.772968 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.773098 kubelet[2536]: W1213 01:27:40.772987 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.773098 kubelet[2536]: E1213 01:27:40.773011 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.774084 kubelet[2536]: E1213 01:27:40.774058 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.774084 kubelet[2536]: W1213 01:27:40.774081 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.774240 kubelet[2536]: E1213 01:27:40.774100 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.787648 kubelet[2536]: E1213 01:27:40.787620 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.787648 kubelet[2536]: W1213 01:27:40.787646 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.787856 kubelet[2536]: E1213 01:27:40.787671 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.798119 kubelet[2536]: E1213 01:27:40.797332 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.798119 kubelet[2536]: W1213 01:27:40.797377 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.798119 kubelet[2536]: E1213 01:27:40.797673 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.798896 kubelet[2536]: E1213 01:27:40.798761 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.798896 kubelet[2536]: W1213 01:27:40.798782 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.799061 kubelet[2536]: E1213 01:27:40.799006 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.800017 kubelet[2536]: E1213 01:27:40.799964 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.800017 kubelet[2536]: W1213 01:27:40.799985 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.800017 kubelet[2536]: E1213 01:27:40.800014 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.800865 kubelet[2536]: E1213 01:27:40.800841 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.800865 kubelet[2536]: W1213 01:27:40.800864 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.801372 kubelet[2536]: E1213 01:27:40.801231 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.801733 kubelet[2536]: E1213 01:27:40.801682 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.801733 kubelet[2536]: W1213 01:27:40.801732 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.802314 kubelet[2536]: E1213 01:27:40.801828 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.802314 kubelet[2536]: E1213 01:27:40.802308 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.802485 kubelet[2536]: W1213 01:27:40.802433 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.802591 kubelet[2536]: E1213 01:27:40.802567 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.803066 kubelet[2536]: E1213 01:27:40.803036 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.803066 kubelet[2536]: W1213 01:27:40.803062 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.803273 kubelet[2536]: E1213 01:27:40.803230 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.803628 kubelet[2536]: E1213 01:27:40.803606 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.803628 kubelet[2536]: W1213 01:27:40.803626 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.803793 kubelet[2536]: E1213 01:27:40.803722 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.805386 kubelet[2536]: E1213 01:27:40.804183 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.805386 kubelet[2536]: W1213 01:27:40.804201 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.805386 kubelet[2536]: E1213 01:27:40.805118 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.805630 kubelet[2536]: E1213 01:27:40.805506 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.805630 kubelet[2536]: W1213 01:27:40.805520 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.805751 kubelet[2536]: E1213 01:27:40.805675 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.806269 kubelet[2536]: E1213 01:27:40.806241 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.806269 kubelet[2536]: W1213 01:27:40.806265 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.806445 kubelet[2536]: E1213 01:27:40.806411 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.806962 kubelet[2536]: E1213 01:27:40.806937 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.806962 kubelet[2536]: W1213 01:27:40.806958 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.807385 kubelet[2536]: E1213 01:27:40.807357 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.807922 kubelet[2536]: E1213 01:27:40.807895 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.807922 kubelet[2536]: W1213 01:27:40.807918 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.808573 kubelet[2536]: E1213 01:27:40.808543 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.808850 kubelet[2536]: E1213 01:27:40.808827 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.808850 kubelet[2536]: W1213 01:27:40.808848 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.809187 kubelet[2536]: E1213 01:27:40.809158 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.810139 kubelet[2536]: E1213 01:27:40.810109 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.810139 kubelet[2536]: W1213 01:27:40.810135 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.810405 kubelet[2536]: E1213 01:27:40.810378 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.810941 kubelet[2536]: E1213 01:27:40.810914 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.810941 kubelet[2536]: W1213 01:27:40.810936 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.811448 kubelet[2536]: E1213 01:27:40.811418 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.812151 kubelet[2536]: E1213 01:27:40.812122 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.812151 kubelet[2536]: W1213 01:27:40.812145 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.812658 kubelet[2536]: E1213 01:27:40.812616 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.813383 kubelet[2536]: E1213 01:27:40.813355 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.813383 kubelet[2536]: W1213 01:27:40.813374 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.813887 kubelet[2536]: E1213 01:27:40.813859 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.814746 kubelet[2536]: E1213 01:27:40.814718 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.814746 kubelet[2536]: W1213 01:27:40.814743 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.815122 kubelet[2536]: E1213 01:27:40.815094 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.818530 kubelet[2536]: E1213 01:27:40.818441 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.818530 kubelet[2536]: W1213 01:27:40.818465 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.818698 kubelet[2536]: E1213 01:27:40.818631 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.819127 kubelet[2536]: E1213 01:27:40.818850 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.819127 kubelet[2536]: W1213 01:27:40.818869 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.819127 kubelet[2536]: E1213 01:27:40.819022 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.819659 kubelet[2536]: E1213 01:27:40.819392 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.819659 kubelet[2536]: W1213 01:27:40.819407 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.819659 kubelet[2536]: E1213 01:27:40.819522 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.819868 kubelet[2536]: E1213 01:27:40.819773 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.819868 kubelet[2536]: W1213 01:27:40.819808 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.819868 kubelet[2536]: E1213 01:27:40.819834 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.820711 kubelet[2536]: E1213 01:27:40.820231 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.820711 kubelet[2536]: W1213 01:27:40.820250 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.820711 kubelet[2536]: E1213 01:27:40.820284 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.820711 kubelet[2536]: E1213 01:27:40.820676 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.820711 kubelet[2536]: W1213 01:27:40.820691 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.820711 kubelet[2536]: E1213 01:27:40.820707 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.840386 kubelet[2536]: E1213 01:27:40.838802 2536 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.840386 kubelet[2536]: W1213 01:27:40.838826 2536 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.840386 kubelet[2536]: E1213 01:27:40.838847 2536 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.886003 containerd[1465]: time="2024-12-13T01:27:40.885933305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l87t5,Uid:925c8b29-3822-4079-81b9-6f728a1e3a50,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:40.934632 containerd[1465]: time="2024-12-13T01:27:40.934233308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:40.934632 containerd[1465]: time="2024-12-13T01:27:40.934378142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
Dec 13 01:27:40.934632 containerd[1465]: time="2024-12-13T01:27:40.934378142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:27:40.934632 containerd[1465]: time="2024-12-13T01:27:40.934428763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:40.935376 containerd[1465]: time="2024-12-13T01:27:40.935235342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:40.985629 systemd[1]: Started cri-containerd-c9509ddf0aac626c451a6cd90ab8376fca528e6bde5b33550571a1f3ae15cc06.scope - libcontainer container c9509ddf0aac626c451a6cd90ab8376fca528e6bde5b33550571a1f3ae15cc06.
Dec 13 01:27:41.066261 containerd[1465]: time="2024-12-13T01:27:41.065778728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bfc6f65d7-q267v,Uid:cecd3461-e682-4333-915e-d1bd997ee129,Namespace:calico-system,Attempt:0,}"
Dec 13 01:27:41.081542 containerd[1465]: time="2024-12-13T01:27:41.081492627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l87t5,Uid:925c8b29-3822-4079-81b9-6f728a1e3a50,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9509ddf0aac626c451a6cd90ab8376fca528e6bde5b33550571a1f3ae15cc06\""
Dec 13 01:27:41.084554 containerd[1465]: time="2024-12-13T01:27:41.084512540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 01:27:41.115153 containerd[1465]: time="2024-12-13T01:27:41.114618611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:27:41.115153 containerd[1465]: time="2024-12-13T01:27:41.114694756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:27:41.115153 containerd[1465]: time="2024-12-13T01:27:41.114721574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:41.115572 containerd[1465]: time="2024-12-13T01:27:41.115286371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:27:41.150618 systemd[1]: Started cri-containerd-4dee5512211ccdc078c2f2e9d38e71f0af5640fb783fe0e36a3d59f3a2ddd5a2.scope - libcontainer container 4dee5512211ccdc078c2f2e9d38e71f0af5640fb783fe0e36a3d59f3a2ddd5a2.
Dec 13 01:27:41.242116 containerd[1465]: time="2024-12-13T01:27:41.241042486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bfc6f65d7-q267v,Uid:cecd3461-e682-4333-915e-d1bd997ee129,Namespace:calico-system,Attempt:0,} returns sandbox id \"4dee5512211ccdc078c2f2e9d38e71f0af5640fb783fe0e36a3d59f3a2ddd5a2\""
Dec 13 01:27:42.111212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3074526979.mount: Deactivated successfully.
Dec 13 01:27:42.230145 kubelet[2536]: E1213 01:27:42.229871 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bb5cr" podUID="a545ca75-b4b0-41f8-ba2f-947389823539"
Dec 13 01:27:42.278841 containerd[1465]: time="2024-12-13T01:27:42.278778119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:42.280211 containerd[1465]: time="2024-12-13T01:27:42.280014930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Dec 13 01:27:42.283386 containerd[1465]: time="2024-12-13T01:27:42.281758472Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:42.285150 containerd[1465]: time="2024-12-13T01:27:42.285105002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:42.286300 containerd[1465]: time="2024-12-13T01:27:42.286252276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.201521043s"
Dec 13 01:27:42.286514 containerd[1465]: time="2024-12-13T01:27:42.286482589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Dec 13 01:27:42.287927 containerd[1465]: time="2024-12-13T01:27:42.287898480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 01:27:42.290524 containerd[1465]: time="2024-12-13T01:27:42.290436765Z" level=info msg="CreateContainer within sandbox \"c9509ddf0aac626c451a6cd90ab8376fca528e6bde5b33550571a1f3ae15cc06\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 01:27:42.314742 containerd[1465]: time="2024-12-13T01:27:42.314683046Z" level=info msg="CreateContainer within sandbox \"c9509ddf0aac626c451a6cd90ab8376fca528e6bde5b33550571a1f3ae15cc06\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"acea71bf7c7d85ca894650ea7b0ead01f95cf19b53e3bec98c5100fe80897620\""
Dec 13 01:27:42.316527 containerd[1465]: time="2024-12-13T01:27:42.316476253Z" level=info msg="StartContainer for \"acea71bf7c7d85ca894650ea7b0ead01f95cf19b53e3bec98c5100fe80897620\""
Dec 13 01:27:42.371788 systemd[1]: Started cri-containerd-acea71bf7c7d85ca894650ea7b0ead01f95cf19b53e3bec98c5100fe80897620.scope - libcontainer container acea71bf7c7d85ca894650ea7b0ead01f95cf19b53e3bec98c5100fe80897620.
Dec 13 01:27:42.423961 containerd[1465]: time="2024-12-13T01:27:42.423726328Z" level=info msg="StartContainer for \"acea71bf7c7d85ca894650ea7b0ead01f95cf19b53e3bec98c5100fe80897620\" returns successfully"
Dec 13 01:27:42.451028 systemd[1]: cri-containerd-acea71bf7c7d85ca894650ea7b0ead01f95cf19b53e3bec98c5100fe80897620.scope: Deactivated successfully.
Dec 13 01:27:42.712673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acea71bf7c7d85ca894650ea7b0ead01f95cf19b53e3bec98c5100fe80897620-rootfs.mount: Deactivated successfully.
Dec 13 01:27:42.805322 containerd[1465]: time="2024-12-13T01:27:42.804998429Z" level=info msg="shim disconnected" id=acea71bf7c7d85ca894650ea7b0ead01f95cf19b53e3bec98c5100fe80897620 namespace=k8s.io
Dec 13 01:27:42.805322 containerd[1465]: time="2024-12-13T01:27:42.805073715Z" level=warning msg="cleaning up after shim disconnected" id=acea71bf7c7d85ca894650ea7b0ead01f95cf19b53e3bec98c5100fe80897620 namespace=k8s.io
Dec 13 01:27:42.805322 containerd[1465]: time="2024-12-13T01:27:42.805088328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:27:44.229816 kubelet[2536]: E1213 01:27:44.229752 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bb5cr" podUID="a545ca75-b4b0-41f8-ba2f-947389823539"
Dec 13 01:27:45.572899 containerd[1465]: time="2024-12-13T01:27:45.572829806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:45.574316 containerd[1465]: time="2024-12-13T01:27:45.574242913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Dec 13 01:27:45.575832 containerd[1465]: time="2024-12-13T01:27:45.575762976Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:45.578867 containerd[1465]: time="2024-12-13T01:27:45.578827504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:45.580065 containerd[1465]: time="2024-12-13T01:27:45.579838368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.291233115s"
Dec 13 01:27:45.580065 containerd[1465]: time="2024-12-13T01:27:45.579886792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 01:27:45.582069 containerd[1465]: time="2024-12-13T01:27:45.581334582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 01:27:45.603145 containerd[1465]: time="2024-12-13T01:27:45.603093269Z" level=info msg="CreateContainer within sandbox \"4dee5512211ccdc078c2f2e9d38e71f0af5640fb783fe0e36a3d59f3a2ddd5a2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 01:27:45.630806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107442621.mount: Deactivated successfully.
Dec 13 01:27:45.632032 containerd[1465]: time="2024-12-13T01:27:45.631607288Z" level=info msg="CreateContainer within sandbox \"4dee5512211ccdc078c2f2e9d38e71f0af5640fb783fe0e36a3d59f3a2ddd5a2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0236d75fff6372b61de525e52753be66411c7b003868044113e94170ed6a6854\""
Dec 13 01:27:45.636248 containerd[1465]: time="2024-12-13T01:27:45.633463228Z" level=info msg="StartContainer for \"0236d75fff6372b61de525e52753be66411c7b003868044113e94170ed6a6854\""
Dec 13 01:27:45.690587 systemd[1]: Started cri-containerd-0236d75fff6372b61de525e52753be66411c7b003868044113e94170ed6a6854.scope - libcontainer container 0236d75fff6372b61de525e52753be66411c7b003868044113e94170ed6a6854.
Dec 13 01:27:45.753407 containerd[1465]: time="2024-12-13T01:27:45.753201399Z" level=info msg="StartContainer for \"0236d75fff6372b61de525e52753be66411c7b003868044113e94170ed6a6854\" returns successfully"
Dec 13 01:27:46.229628 kubelet[2536]: E1213 01:27:46.229526 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bb5cr" podUID="a545ca75-b4b0-41f8-ba2f-947389823539"
Dec 13 01:27:47.330739 kubelet[2536]: I1213 01:27:47.330472 2536 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:27:48.230315 kubelet[2536]: E1213 01:27:48.230229 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bb5cr" podUID="a545ca75-b4b0-41f8-ba2f-947389823539"
Dec 13 01:27:48.334156 kubelet[2536]: I1213 01:27:48.333311 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bfc6f65d7-q267v" podStartSLOduration=3.997624637 podStartE2EDuration="8.333283586s" podCreationTimestamp="2024-12-13 01:27:40 +0000 UTC" firstStartedPulling="2024-12-13 01:27:41.245424869 +0000 UTC m=+14.164359867" lastFinishedPulling="2024-12-13 01:27:45.58108381 +0000 UTC m=+18.500018816" observedRunningTime="2024-12-13 01:27:46.344612962 +0000 UTC m=+19.263547964" watchObservedRunningTime="2024-12-13 01:27:48.333283586 +0000 UTC m=+21.252218595"
Dec 13 01:27:49.693738 containerd[1465]: time="2024-12-13T01:27:49.693670257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:49.695039 containerd[1465]: time="2024-12-13T01:27:49.694950860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Dec 13 01:27:49.698281 containerd[1465]: time="2024-12-13T01:27:49.696332339Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:49.699677 containerd[1465]: time="2024-12-13T01:27:49.699633010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:49.701364 containerd[1465]: time="2024-12-13T01:27:49.701298938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.119898397s"
Dec 13 01:27:49.701544 containerd[1465]: time="2024-12-13T01:27:49.701514357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Dec 13 01:27:49.704911 containerd[1465]: time="2024-12-13T01:27:49.704850843Z" level=info msg="CreateContainer within sandbox \"c9509ddf0aac626c451a6cd90ab8376fca528e6bde5b33550571a1f3ae15cc06\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 01:27:49.725390 containerd[1465]: time="2024-12-13T01:27:49.725317190Z" level=info msg="CreateContainer within sandbox \"c9509ddf0aac626c451a6cd90ab8376fca528e6bde5b33550571a1f3ae15cc06\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1de78f441354509df2d01b17c5e843814010c6ec193f05cc682acd68ea69fa52\""
Dec 13 01:27:49.726580 containerd[1465]: time="2024-12-13T01:27:49.726519309Z" level=info msg="StartContainer for \"1de78f441354509df2d01b17c5e843814010c6ec193f05cc682acd68ea69fa52\""
Dec 13 01:27:49.782610 systemd[1]: Started cri-containerd-1de78f441354509df2d01b17c5e843814010c6ec193f05cc682acd68ea69fa52.scope - libcontainer container 1de78f441354509df2d01b17c5e843814010c6ec193f05cc682acd68ea69fa52.
Dec 13 01:27:49.820336 containerd[1465]: time="2024-12-13T01:27:49.820277575Z" level=info msg="StartContainer for \"1de78f441354509df2d01b17c5e843814010c6ec193f05cc682acd68ea69fa52\" returns successfully"
Dec 13 01:27:50.230150 kubelet[2536]: E1213 01:27:50.230069 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bb5cr" podUID="a545ca75-b4b0-41f8-ba2f-947389823539"
Dec 13 01:27:50.689935 systemd[1]: cri-containerd-1de78f441354509df2d01b17c5e843814010c6ec193f05cc682acd68ea69fa52.scope: Deactivated successfully.
Dec 13 01:27:50.729032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1de78f441354509df2d01b17c5e843814010c6ec193f05cc682acd68ea69fa52-rootfs.mount: Deactivated successfully.
Dec 13 01:27:50.734395 kubelet[2536]: I1213 01:27:50.734364 2536 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 01:27:50.801504 systemd[1]: Created slice kubepods-burstable-podcee062b2_3528_4440_b732_1044f2d79299.slice - libcontainer container kubepods-burstable-podcee062b2_3528_4440_b732_1044f2d79299.slice.
Dec 13 01:27:50.816335 systemd[1]: Created slice kubepods-besteffort-podc78149af_1946_4fbc_9d93_badab5c4fb43.slice - libcontainer container kubepods-besteffort-podc78149af_1946_4fbc_9d93_badab5c4fb43.slice.
Dec 13 01:27:50.842390 systemd[1]: Created slice kubepods-besteffort-podd6b02353_2595_4ed5_9b02_d81f11de016f.slice - libcontainer container kubepods-besteffort-podd6b02353_2595_4ed5_9b02_d81f11de016f.slice.
Dec 13 01:27:50.854595 systemd[1]: Created slice kubepods-besteffort-podca0e53dc_8a33_4adf_906d_b0bf232eb9c0.slice - libcontainer container kubepods-besteffort-podca0e53dc_8a33_4adf_906d_b0bf232eb9c0.slice.
Dec 13 01:27:50.871412 systemd[1]: Created slice kubepods-burstable-podc2b53d76_5bdc_4901_a9e7_dfdb2cc2545b.slice - libcontainer container kubepods-burstable-podc2b53d76_5bdc_4901_a9e7_dfdb2cc2545b.slice.
Dec 13 01:27:50.887305 kubelet[2536]: I1213 01:27:50.887252 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c78149af-1946-4fbc-9d93-badab5c4fb43-tigera-ca-bundle\") pod \"calico-kube-controllers-5f9796d998-b2p2w\" (UID: \"c78149af-1946-4fbc-9d93-badab5c4fb43\") " pod="calico-system/calico-kube-controllers-5f9796d998-b2p2w"
Dec 13 01:27:50.923577 kubelet[2536]: I1213 01:27:50.887494 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cee062b2-3528-4440-b732-1044f2d79299-config-volume\") pod \"coredns-6f6b679f8f-hzxx4\" (UID: \"cee062b2-3528-4440-b732-1044f2d79299\") " pod="kube-system/coredns-6f6b679f8f-hzxx4"
Dec 13 01:27:50.923577 kubelet[2536]: I1213 01:27:50.887535 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7lck\" (UniqueName: \"kubernetes.io/projected/c78149af-1946-4fbc-9d93-badab5c4fb43-kube-api-access-z7lck\") pod \"calico-kube-controllers-5f9796d998-b2p2w\" (UID: \"c78149af-1946-4fbc-9d93-badab5c4fb43\") " pod="calico-system/calico-kube-controllers-5f9796d998-b2p2w"
Dec 13 01:27:50.923577 kubelet[2536]: I1213 01:27:50.887582 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d6b02353-2595-4ed5-9b02-d81f11de016f-calico-apiserver-certs\") pod \"calico-apiserver-6f8dcc75bf-2tgk7\" (UID: \"d6b02353-2595-4ed5-9b02-d81f11de016f\") " pod="calico-apiserver/calico-apiserver-6f8dcc75bf-2tgk7"
Dec 13 01:27:50.923577 kubelet[2536]: I1213 01:27:50.887614 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw256\" (UniqueName: \"kubernetes.io/projected/d6b02353-2595-4ed5-9b02-d81f11de016f-kube-api-access-pw256\") pod \"calico-apiserver-6f8dcc75bf-2tgk7\" (UID: \"d6b02353-2595-4ed5-9b02-d81f11de016f\") " pod="calico-apiserver/calico-apiserver-6f8dcc75bf-2tgk7"
Dec 13 01:27:50.923577 kubelet[2536]: I1213 01:27:50.887645 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mld64\" (UniqueName: \"kubernetes.io/projected/ca0e53dc-8a33-4adf-906d-b0bf232eb9c0-kube-api-access-mld64\") pod \"calico-apiserver-6f8dcc75bf-xj2tb\" (UID: \"ca0e53dc-8a33-4adf-906d-b0bf232eb9c0\") " pod="calico-apiserver/calico-apiserver-6f8dcc75bf-xj2tb"
Dec 13 01:27:50.923904 kubelet[2536]: I1213 01:27:50.887714 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hqht\" (UniqueName: \"kubernetes.io/projected/cee062b2-3528-4440-b732-1044f2d79299-kube-api-access-7hqht\") pod \"coredns-6f6b679f8f-hzxx4\" (UID: \"cee062b2-3528-4440-b732-1044f2d79299\") " pod="kube-system/coredns-6f6b679f8f-hzxx4"
Dec 13 01:27:50.923904 kubelet[2536]: I1213 01:27:50.887747 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b-config-volume\") pod \"coredns-6f6b679f8f-p55lg\" (UID: \"c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b\") " pod="kube-system/coredns-6f6b679f8f-p55lg"
Dec 13 01:27:50.923904 kubelet[2536]: I1213 01:27:50.887781 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66cz8\" (UniqueName: \"kubernetes.io/projected/c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b-kube-api-access-66cz8\") pod \"coredns-6f6b679f8f-p55lg\" (UID: \"c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b\") " pod="kube-system/coredns-6f6b679f8f-p55lg"
Dec 13 01:27:50.923904 kubelet[2536]: I1213 01:27:50.887814 2536 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca0e53dc-8a33-4adf-906d-b0bf232eb9c0-calico-apiserver-certs\") pod \"calico-apiserver-6f8dcc75bf-xj2tb\" (UID: \"ca0e53dc-8a33-4adf-906d-b0bf232eb9c0\") " pod="calico-apiserver/calico-apiserver-6f8dcc75bf-xj2tb"
Dec 13 01:27:51.115172 containerd[1465]: time="2024-12-13T01:27:51.114629580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hzxx4,Uid:cee062b2-3528-4440-b732-1044f2d79299,Namespace:kube-system,Attempt:0,}"
Dec 13 01:27:51.127537 containerd[1465]: time="2024-12-13T01:27:51.127480584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9796d998-b2p2w,Uid:c78149af-1946-4fbc-9d93-badab5c4fb43,Namespace:calico-system,Attempt:0,}"
Dec 13 01:27:51.149217 containerd[1465]: time="2024-12-13T01:27:51.149166370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dcc75bf-2tgk7,Uid:d6b02353-2595-4ed5-9b02-d81f11de016f,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 01:27:51.164761 containerd[1465]: time="2024-12-13T01:27:51.164697333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dcc75bf-xj2tb,Uid:ca0e53dc-8a33-4adf-906d-b0bf232eb9c0,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 01:27:51.226468 containerd[1465]: time="2024-12-13T01:27:51.226259043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p55lg,Uid:c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b,Namespace:kube-system,Attempt:0,}"
Dec 13 01:27:51.769437 containerd[1465]: time="2024-12-13T01:27:51.769355683Z" level=info msg="shim disconnected" id=1de78f441354509df2d01b17c5e843814010c6ec193f05cc682acd68ea69fa52 namespace=k8s.io
Dec 13 01:27:51.769437 containerd[1465]: time="2024-12-13T01:27:51.769426948Z" level=warning msg="cleaning up after shim disconnected" id=1de78f441354509df2d01b17c5e843814010c6ec193f05cc682acd68ea69fa52 namespace=k8s.io
Dec 13 01:27:51.769437 containerd[1465]: time="2024-12-13T01:27:51.769440697Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:27:51.790442 containerd[1465]: time="2024-12-13T01:27:51.789319353Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:27:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:27:52.047419 containerd[1465]: time="2024-12-13T01:27:52.047199395Z" level=error msg="Failed to destroy network for sandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.048474 containerd[1465]: time="2024-12-13T01:27:52.048268592Z" level=error msg="encountered an error cleaning up failed sandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.048839 containerd[1465]: time="2024-12-13T01:27:52.048797071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p55lg,Uid:c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.049418 kubelet[2536]: E1213 01:27:52.049228 2536 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.049418 kubelet[2536]: E1213 01:27:52.049321 2536 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-p55lg"
Dec 13 01:27:52.049418 kubelet[2536]: E1213 01:27:52.049368 2536 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-p55lg"
Dec 13 01:27:52.050175 kubelet[2536]: E1213 01:27:52.050099 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-p55lg_kube-system(c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-p55lg_kube-system(c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-p55lg" podUID="c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b"
Dec 13 01:27:52.071111 containerd[1465]: time="2024-12-13T01:27:52.071027987Z" level=error msg="Failed to destroy network for sandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.071907 containerd[1465]: time="2024-12-13T01:27:52.071843370Z" level=error msg="encountered an error cleaning up failed sandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.072194 containerd[1465]: time="2024-12-13T01:27:52.072149617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9796d998-b2p2w,Uid:c78149af-1946-4fbc-9d93-badab5c4fb43,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.074122 kubelet[2536]: E1213 01:27:52.072713 2536 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.074122 kubelet[2536]: E1213 01:27:52.072797 2536 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f9796d998-b2p2w"
Dec 13 01:27:52.074122 kubelet[2536]: E1213 01:27:52.072830 2536 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f9796d998-b2p2w"
Dec 13 01:27:52.074452 kubelet[2536]: E1213 01:27:52.072889 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f9796d998-b2p2w_calico-system(c78149af-1946-4fbc-9d93-badab5c4fb43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f9796d998-b2p2w_calico-system(c78149af-1946-4fbc-9d93-badab5c4fb43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f9796d998-b2p2w" podUID="c78149af-1946-4fbc-9d93-badab5c4fb43"
Dec 13 01:27:52.085727 containerd[1465]: time="2024-12-13T01:27:52.085598458Z" level=error msg="Failed to destroy network for sandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.086608 containerd[1465]: time="2024-12-13T01:27:52.086547367Z" level=error msg="encountered an error cleaning up failed sandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.086911 containerd[1465]: time="2024-12-13T01:27:52.086726434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dcc75bf-2tgk7,Uid:d6b02353-2595-4ed5-9b02-d81f11de016f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.087413 kubelet[2536]: E1213 01:27:52.087116 2536 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.087413 kubelet[2536]: E1213 01:27:52.087186 2536 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8dcc75bf-2tgk7"
Dec 13 01:27:52.087413 kubelet[2536]: E1213 01:27:52.087221 2536 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8dcc75bf-2tgk7"
Dec 13 01:27:52.090475 kubelet[2536]: E1213 01:27:52.087303 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8dcc75bf-2tgk7_calico-apiserver(d6b02353-2595-4ed5-9b02-d81f11de016f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8dcc75bf-2tgk7_calico-apiserver(d6b02353-2595-4ed5-9b02-d81f11de016f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8dcc75bf-2tgk7" podUID="d6b02353-2595-4ed5-9b02-d81f11de016f"
Dec 13 01:27:52.090759 containerd[1465]: time="2024-12-13T01:27:52.090718926Z" level=error msg="Failed to destroy network for sandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.091320 containerd[1465]: time="2024-12-13T01:27:52.091262268Z" level=error msg="encountered an error cleaning up failed sandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.091479 containerd[1465]: time="2024-12-13T01:27:52.091361076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dcc75bf-xj2tb,Uid:ca0e53dc-8a33-4adf-906d-b0bf232eb9c0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.091621 kubelet[2536]: E1213 01:27:52.091573 2536 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:27:52.091733 kubelet[2536]: E1213 01:27:52.091647 2536 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8dcc75bf-xj2tb"
Dec 13 01:27:52.091733 kubelet[2536]: E1213 01:27:52.091678 2536 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8dcc75bf-xj2tb"
Dec 13 01:27:52.091906 kubelet[2536]: E1213 01:27:52.091742 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8dcc75bf-xj2tb_calico-apiserver(ca0e53dc-8a33-4adf-906d-b0bf232eb9c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8dcc75bf-xj2tb_calico-apiserver(ca0e53dc-8a33-4adf-906d-b0bf232eb9c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8dcc75bf-xj2tb" podUID="ca0e53dc-8a33-4adf-906d-b0bf232eb9c0" Dec 13 01:27:52.096646 containerd[1465]: time="2024-12-13T01:27:52.096578819Z" level=error msg="Failed to destroy network for sandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.098300 containerd[1465]: time="2024-12-13T01:27:52.098029563Z" level=error msg="encountered an error cleaning up failed sandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.098545 containerd[1465]: time="2024-12-13T01:27:52.098499388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hzxx4,Uid:cee062b2-3528-4440-b732-1044f2d79299,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.098818 kubelet[2536]: E1213 01:27:52.098776 2536 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.098968 kubelet[2536]: E1213 01:27:52.098841 2536 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hzxx4" Dec 13 01:27:52.098968 kubelet[2536]: E1213 01:27:52.098873 2536 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hzxx4" Dec 13 01:27:52.098968 kubelet[2536]: E1213 01:27:52.098934 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hzxx4_kube-system(cee062b2-3528-4440-b732-1044f2d79299)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hzxx4_kube-system(cee062b2-3528-4440-b732-1044f2d79299)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hzxx4" podUID="cee062b2-3528-4440-b732-1044f2d79299" Dec 13 01:27:52.238137 systemd[1]: Created slice kubepods-besteffort-poda545ca75_b4b0_41f8_ba2f_947389823539.slice - libcontainer container kubepods-besteffort-poda545ca75_b4b0_41f8_ba2f_947389823539.slice. Dec 13 01:27:52.243406 containerd[1465]: time="2024-12-13T01:27:52.243030556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bb5cr,Uid:a545ca75-b4b0-41f8-ba2f-947389823539,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:52.320839 containerd[1465]: time="2024-12-13T01:27:52.320672379Z" level=error msg="Failed to destroy network for sandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.321164 containerd[1465]: time="2024-12-13T01:27:52.321101862Z" level=error msg="encountered an error cleaning up failed sandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.324013 containerd[1465]: time="2024-12-13T01:27:52.321197131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bb5cr,Uid:a545ca75-b4b0-41f8-ba2f-947389823539,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.324110 kubelet[2536]: E1213 01:27:52.323419 2536 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.324110 kubelet[2536]: E1213 01:27:52.323492 2536 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bb5cr" Dec 13 01:27:52.324110 kubelet[2536]: E1213 01:27:52.323525 2536 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bb5cr" Dec 13 01:27:52.324300 kubelet[2536]: E1213 01:27:52.323598 2536 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bb5cr_calico-system(a545ca75-b4b0-41f8-ba2f-947389823539)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bb5cr_calico-system(a545ca75-b4b0-41f8-ba2f-947389823539)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bb5cr" podUID="a545ca75-b4b0-41f8-ba2f-947389823539" Dec 13 01:27:52.352602 kubelet[2536]: I1213 01:27:52.352531 2536 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Dec 13 01:27:52.354712 containerd[1465]: time="2024-12-13T01:27:52.354614923Z" level=info msg="StopPodSandbox for \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\"" Dec 13 01:27:52.354996 containerd[1465]: time="2024-12-13T01:27:52.354849782Z" level=info msg="Ensure that sandbox ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2 in task-service has been cleanup successfully" Dec 13 01:27:52.368903 containerd[1465]: time="2024-12-13T01:27:52.368569302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:27:52.370242 kubelet[2536]: I1213 01:27:52.370114 2536 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:27:52.371370 containerd[1465]: time="2024-12-13T01:27:52.370940205Z" level=info msg="StopPodSandbox for \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\"" Dec 13 01:27:52.371370 containerd[1465]: time="2024-12-13T01:27:52.371218636Z" level=info msg="Ensure that sandbox 83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071 in task-service has been cleanup successfully" Dec 13 01:27:52.375006 kubelet[2536]: I1213 01:27:52.374979 2536 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:27:52.377596 containerd[1465]: time="2024-12-13T01:27:52.377531898Z" level=info msg="StopPodSandbox for \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\"" Dec 13 01:27:52.377851 containerd[1465]: time="2024-12-13T01:27:52.377767751Z" level=info msg="Ensure that sandbox 7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d in task-service has been cleanup successfully" Dec 13 01:27:52.380879 kubelet[2536]: I1213 01:27:52.380450 2536 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:27:52.381324 containerd[1465]: time="2024-12-13T01:27:52.381264493Z" level=info msg="StopPodSandbox for \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\"" Dec 13 01:27:52.381836 containerd[1465]: time="2024-12-13T01:27:52.381808454Z" level=info msg="Ensure that sandbox c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238 in task-service has been cleanup successfully" Dec 13 01:27:52.388017 kubelet[2536]: I1213 01:27:52.387021 2536 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 
01:27:52.389518 containerd[1465]: time="2024-12-13T01:27:52.389478637Z" level=info msg="StopPodSandbox for \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\"" Dec 13 01:27:52.393707 containerd[1465]: time="2024-12-13T01:27:52.389712854Z" level=info msg="Ensure that sandbox c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6 in task-service has been cleanup successfully" Dec 13 01:27:52.397246 kubelet[2536]: I1213 01:27:52.397222 2536 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:27:52.404909 containerd[1465]: time="2024-12-13T01:27:52.404776819Z" level=info msg="StopPodSandbox for \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\"" Dec 13 01:27:52.409380 containerd[1465]: time="2024-12-13T01:27:52.409018057Z" level=info msg="Ensure that sandbox e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e in task-service has been cleanup successfully" Dec 13 01:27:52.483370 containerd[1465]: time="2024-12-13T01:27:52.483290983Z" level=error msg="StopPodSandbox for \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\" failed" error="failed to destroy network for sandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.484200 kubelet[2536]: E1213 01:27:52.483875 2536 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Dec 13 01:27:52.484200 kubelet[2536]: E1213 01:27:52.483968 2536 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2"} Dec 13 01:27:52.484200 kubelet[2536]: E1213 01:27:52.484083 2536 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c78149af-1946-4fbc-9d93-badab5c4fb43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:52.484200 kubelet[2536]: E1213 01:27:52.484121 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c78149af-1946-4fbc-9d93-badab5c4fb43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f9796d998-b2p2w" podUID="c78149af-1946-4fbc-9d93-badab5c4fb43" Dec 13 01:27:52.551403 containerd[1465]: 
time="2024-12-13T01:27:52.551187904Z" level=error msg="StopPodSandbox for \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\" failed" error="failed to destroy network for sandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.552369 kubelet[2536]: E1213 01:27:52.552101 2536 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:27:52.552369 kubelet[2536]: E1213 01:27:52.552174 2536 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6"} Dec 13 01:27:52.552369 kubelet[2536]: E1213 01:27:52.552246 2536 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a545ca75-b4b0-41f8-ba2f-947389823539\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:52.552369 kubelet[2536]: E1213 01:27:52.552289 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a545ca75-b4b0-41f8-ba2f-947389823539\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bb5cr" podUID="a545ca75-b4b0-41f8-ba2f-947389823539" Dec 13 01:27:52.555665 containerd[1465]: time="2024-12-13T01:27:52.555593894Z" level=error msg="StopPodSandbox for \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\" failed" error="failed to destroy network for sandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.556579 kubelet[2536]: E1213 01:27:52.556534 2536 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:27:52.556838 kubelet[2536]: E1213 01:27:52.556801 2536 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071"} Dec 13 01:27:52.556997 kubelet[2536]: E1213 01:27:52.556974 2536 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca0e53dc-8a33-4adf-906d-b0bf232eb9c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:52.557194 kubelet[2536]: E1213 01:27:52.557147 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca0e53dc-8a33-4adf-906d-b0bf232eb9c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8dcc75bf-xj2tb" podUID="ca0e53dc-8a33-4adf-906d-b0bf232eb9c0" Dec 13 01:27:52.559129 containerd[1465]: time="2024-12-13T01:27:52.559033805Z" level=error msg="StopPodSandbox for \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\" failed" error="failed to destroy network for sandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.560384 containerd[1465]: time="2024-12-13T01:27:52.559294794Z" level=error msg="StopPodSandbox for \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\" failed" error="failed to destroy network for sandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.560485 kubelet[2536]: E1213 01:27:52.559498 2536 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:27:52.560485 kubelet[2536]: E1213 01:27:52.559554 2536 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d"} Dec 13 01:27:52.560609 kubelet[2536]: E1213 01:27:52.559596 2536 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6b02353-2595-4ed5-9b02-d81f11de016f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:52.561394 kubelet[2536]: E1213 01:27:52.560715 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6b02353-2595-4ed5-9b02-d81f11de016f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8dcc75bf-2tgk7" podUID="d6b02353-2595-4ed5-9b02-d81f11de016f" Dec 13 01:27:52.561394 kubelet[2536]: E1213 01:27:52.561297 2536 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:27:52.561394 kubelet[2536]: E1213 01:27:52.561363 2536 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238"} Dec 13 01:27:52.561650 kubelet[2536]: E1213 01:27:52.561419 2536 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cee062b2-3528-4440-b732-1044f2d79299\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:52.561650 kubelet[2536]: E1213 01:27:52.561456 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cee062b2-3528-4440-b732-1044f2d79299\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hzxx4" podUID="cee062b2-3528-4440-b732-1044f2d79299" Dec 13 01:27:52.577605 containerd[1465]: time="2024-12-13T01:27:52.577389751Z" level=error msg="StopPodSandbox for \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\" failed" error="failed to destroy network for sandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:52.577966 kubelet[2536]: E1213 01:27:52.577690 2536 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:27:52.577966 kubelet[2536]: E1213 01:27:52.577760 2536 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e"} Dec 13 01:27:52.577966 kubelet[2536]: E1213 01:27:52.577796 2536 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:52.577966 kubelet[2536]: E1213 01:27:52.577827 2536 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-p55lg" podUID="c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b" Dec 13 01:27:52.730686 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2-shm.mount: Deactivated successfully. Dec 13 01:27:52.730835 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071-shm.mount: Deactivated successfully. Dec 13 01:27:52.730934 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238-shm.mount: Deactivated successfully. Dec 13 01:27:59.045015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount105334521.mount: Deactivated successfully. 
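Every failure in the burst above bottoms out in the same stat call: the Calico CNI plugin looks for /var/lib/calico/nodename, a file the calico/node agent writes once it starts, so both the ADD and DEL operations fail until that agent is up. A minimal Go sketch of the check implied by the error string (illustrative only, not Calico's actual source; the path is the one named in the log):

    package main

    import (
    	"fmt"
    	"os"
    )

    // Path named in the repeated error above; calico/node writes it at startup.
    const nodenameFile = "/var/lib/calico/nodename"

    func main() {
    	name, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		if os.IsNotExist(err) {
    			// Mirrors the hint in the log: the node agent has not yet
    			// mounted /var/lib/calico/ and written its node name.
    			fmt.Println("check that the calico/node container is running and has mounted /var/lib/calico/")
    		}
    		os.Exit(1)
    	}
    	fmt.Printf("node name: %s\n", name)
    }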
Dec 13 01:27:59.087646 containerd[1465]: time="2024-12-13T01:27:59.087581035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:59.088955 containerd[1465]: time="2024-12-13T01:27:59.088902242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:27:59.092027 containerd[1465]: time="2024-12-13T01:27:59.090188218Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:59.094412 containerd[1465]: time="2024-12-13T01:27:59.093186199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:59.094412 containerd[1465]: time="2024-12-13T01:27:59.094221367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.725350301s" Dec 13 01:27:59.094412 containerd[1465]: time="2024-12-13T01:27:59.094264304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:27:59.109412 containerd[1465]: time="2024-12-13T01:27:59.107792466Z" level=info msg="CreateContainer within sandbox \"c9509ddf0aac626c451a6cd90ab8376fca528e6bde5b33550571a1f3ae15cc06\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:27:59.131626 containerd[1465]: time="2024-12-13T01:27:59.131572987Z" level=info msg="CreateContainer within sandbox \"c9509ddf0aac626c451a6cd90ab8376fca528e6bde5b33550571a1f3ae15cc06\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"65814a5692f03436ffee4bf7bd97ab335dad7299d294727a83d7704967ae88a0\"" Dec 13 01:27:59.132929 containerd[1465]: time="2024-12-13T01:27:59.132889574Z" level=info msg="StartContainer for \"65814a5692f03436ffee4bf7bd97ab335dad7299d294727a83d7704967ae88a0\"" Dec 13 01:27:59.179629 systemd[1]: Started cri-containerd-65814a5692f03436ffee4bf7bd97ab335dad7299d294727a83d7704967ae88a0.scope - libcontainer container 65814a5692f03436ffee4bf7bd97ab335dad7299d294727a83d7704967ae88a0. Dec 13 01:27:59.228334 containerd[1465]: time="2024-12-13T01:27:59.228159070Z" level=info msg="StartContainer for \"65814a5692f03436ffee4bf7bd97ab335dad7299d294727a83d7704967ae88a0\" returns successfully" Dec 13 01:27:59.345400 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:27:59.345554 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
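The "in 6.725350301s" in the Pulled entry is consistent with the surrounding timestamps: PullImage was logged at 01:27:52.368569302 and the Pulled event at 01:27:59.094221367. A quick cross-check of that arithmetic, with both timestamps copied from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// PullImage logged (earlier entry) and Pulled logged (entry above).
    	start, _ := time.Parse(time.RFC3339Nano, "2024-12-13T01:27:52.368569302Z")
    	end, _ := time.Parse(time.RFC3339Nano, "2024-12-13T01:27:59.094221367Z")
    	// Prints 6.725652065s: within a millisecond of the 6.725350301s that
    	// containerd itself measured for the pull, the gap being log overhead.
    	fmt.Println(end.Sub(start))
    }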
Dec 13 01:27:59.481998 kubelet[2536]: I1213 01:27:59.481055 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l87t5" podStartSLOduration=1.468746761 podStartE2EDuration="19.481018846s" podCreationTimestamp="2024-12-13 01:27:40 +0000 UTC" firstStartedPulling="2024-12-13 01:27:41.083745413 +0000 UTC m=+14.002680412" lastFinishedPulling="2024-12-13 01:27:59.096017492 +0000 UTC m=+32.014952497" observedRunningTime="2024-12-13 01:27:59.474656755 +0000 UTC m=+32.393591766" watchObservedRunningTime="2024-12-13 01:27:59.481018846 +0000 UTC m=+32.399953854" Dec 13 01:28:01.253407 kernel: bpftool[3820]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:28:01.528896 systemd-networkd[1363]: vxlan.calico: Link UP Dec 13 01:28:01.530072 systemd-networkd[1363]: vxlan.calico: Gained carrier Dec 13 01:28:02.629708 systemd-networkd[1363]: vxlan.calico: Gained IPv6LL Dec 13 01:28:04.832083 ntpd[1433]: Listen normally on 7 vxlan.calico 192.168.75.0:123 Dec 13 01:28:04.832686 ntpd[1433]: 13 Dec 01:28:04 ntpd[1433]: Listen normally on 7 vxlan.calico 192.168.75.0:123 Dec 13 01:28:04.832686 ntpd[1433]: 13 Dec 01:28:04 ntpd[1433]: Listen normally on 8 vxlan.calico [fe80::649b:fcff:fe33:110e%4]:123 Dec 13 01:28:04.832229 ntpd[1433]: Listen normally on 8 vxlan.calico [fe80::649b:fcff:fe33:110e%4]:123 Dec 13 01:28:05.232127 containerd[1465]: time="2024-12-13T01:28:05.231871102Z" level=info msg="StopPodSandbox for \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\"" Dec 13 01:28:05.235223 containerd[1465]: time="2024-12-13T01:28:05.232305988Z" level=info msg="StopPodSandbox for \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\"" Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.317 [INFO][3917] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.317 [INFO][3917] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" iface="eth0" netns="/var/run/netns/cni-b3531904-3fd8-2500-6412-63c369e57ed9" Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.318 [INFO][3917] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" iface="eth0" netns="/var/run/netns/cni-b3531904-3fd8-2500-6412-63c369e57ed9" Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.319 [INFO][3917] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" iface="eth0" netns="/var/run/netns/cni-b3531904-3fd8-2500-6412-63c369e57ed9" Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.319 [INFO][3917] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.319 [INFO][3917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.373 [INFO][3933] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" HandleID="k8s-pod-network.c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.374 [INFO][3933] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.374 [INFO][3933] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.382 [WARNING][3933] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" HandleID="k8s-pod-network.c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.382 [INFO][3933] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" HandleID="k8s-pod-network.c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.384 [INFO][3933] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:05.394391 containerd[1465]: 2024-12-13 01:28:05.390 [INFO][3917] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:28:05.395423 containerd[1465]: time="2024-12-13T01:28:05.395308096Z" level=info msg="TearDown network for sandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\" successfully" Dec 13 01:28:05.396389 containerd[1465]: time="2024-12-13T01:28:05.395568960Z" level=info msg="StopPodSandbox for \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\" returns successfully" Dec 13 01:28:05.397408 containerd[1465]: time="2024-12-13T01:28:05.396766493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hzxx4,Uid:cee062b2-3528-4440-b732-1044f2d79299,Namespace:kube-system,Attempt:1,}" Dec 13 01:28:05.399517 systemd[1]: run-netns-cni\x2db3531904\x2d3fd8\x2d2500\x2d6412\x2d63c369e57ed9.mount: Deactivated successfully. 
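The DEL trace above shows why this StopPodSandbox for c4395a… succeeds where the earlier attempts failed: with calico/node now running, the plugin can clean up the netns (the veth is already gone), release the IPAM handle under the host-wide lock, and treat the "Asked to release address but it doesn't exist" warning as a no-op, since the original ADD never got far enough to assign one. A schematic of that idempotent release (an assumed shape for illustration, not Calico's code):

    package main

    import (
    	"fmt"
    	"sync"
    )

    type ipam struct {
    	mu    sync.Mutex        // the "host-wide IPAM lock" in the trace
    	addrs map[string]string // handleID -> assigned IP
    }

    // ReleaseByHandle frees whatever the handle holds; a missing address is
    // ignored, matching the WARNING above, because a failed ADD may never
    // have assigned one.
    func (c *ipam) ReleaseByHandle(handleID string) error {
    	c.mu.Lock()         // "About to acquire host-wide IPAM lock."
    	defer c.mu.Unlock() // "Released host-wide IPAM lock."
    	if _, ok := c.addrs[handleID]; !ok {
    		// "Asked to release address but it doesn't exist. Ignoring"
    		return nil
    	}
    	delete(c.addrs, handleID)
    	return nil
    }

    func main() {
    	c := &ipam{addrs: map[string]string{}}
    	fmt.Println(c.ReleaseByHandle("k8s-pod-network.c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238"))
    }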
Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.327 [INFO][3924] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.327 [INFO][3924] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" iface="eth0" netns="/var/run/netns/cni-635e79db-84a4-b932-74a8-1d2c4680571d" Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.328 [INFO][3924] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" iface="eth0" netns="/var/run/netns/cni-635e79db-84a4-b932-74a8-1d2c4680571d" Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.329 [INFO][3924] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" iface="eth0" netns="/var/run/netns/cni-635e79db-84a4-b932-74a8-1d2c4680571d" Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.329 [INFO][3924] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.329 [INFO][3924] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.375 [INFO][3937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" HandleID="k8s-pod-network.e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.375 [INFO][3937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.385 [INFO][3937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.398 [WARNING][3937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" HandleID="k8s-pod-network.e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.398 [INFO][3937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" HandleID="k8s-pod-network.e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.403 [INFO][3937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:05.408574 containerd[1465]: 2024-12-13 01:28:05.407 [INFO][3924] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:28:05.412379 containerd[1465]: time="2024-12-13T01:28:05.412039667Z" level=info msg="TearDown network for sandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\" successfully" Dec 13 01:28:05.412379 containerd[1465]: time="2024-12-13T01:28:05.412077732Z" level=info msg="StopPodSandbox for \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\" returns successfully" Dec 13 01:28:05.414657 containerd[1465]: time="2024-12-13T01:28:05.414606820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p55lg,Uid:c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b,Namespace:kube-system,Attempt:1,}" Dec 13 01:28:05.416739 systemd[1]: run-netns-cni\x2d635e79db\x2d84a4\x2db932\x2d74a8\x2d1d2c4680571d.mount: Deactivated successfully. Dec 13 01:28:05.626519 systemd-networkd[1363]: cali3f91207b77e: Link UP Dec 13 01:28:05.626822 systemd-networkd[1363]: cali3f91207b77e: Gained carrier Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.504 [INFO][3945] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0 coredns-6f6b679f8f- kube-system cee062b2-3528-4440-b732-1044f2d79299 736 0 2024-12-13 01:27:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal coredns-6f6b679f8f-hzxx4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f91207b77e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" Namespace="kube-system" Pod="coredns-6f6b679f8f-hzxx4" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-" Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.505 [INFO][3945] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" Namespace="kube-system" Pod="coredns-6f6b679f8f-hzxx4" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.566 [INFO][3967] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" HandleID="k8s-pod-network.49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.580 [INFO][3967] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" HandleID="k8s-pod-network.49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319630), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", "pod":"coredns-6f6b679f8f-hzxx4", "timestamp":"2024-12-13 
01:28:05.566845372 +0000 UTC"}, Hostname:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.580 [INFO][3967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.580 [INFO][3967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.580 [INFO][3967] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal' Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.583 [INFO][3967] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.589 [INFO][3967] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.595 [INFO][3967] ipam/ipam.go 489: Trying affinity for 192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.598 [INFO][3967] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.600 [INFO][3967] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.600 [INFO][3967] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.602 [INFO][3967] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784 Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.608 [INFO][3967] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.618 [INFO][3967] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.1/26] block=192.168.75.0/26 handle="k8s-pod-network.49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.618 [INFO][3967] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.1/26] handle="k8s-pod-network.49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.618 [INFO][3967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
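The assignment above walks the block-affinity path: the node already holds an affinity for 192.168.75.0/26, loads that block, and claims 192.168.75.1 for the pod, while the block's base address 192.168.75.0 is the one the vxlan.calico device itself holds (see the earlier ntpd listen lines). The address arithmetic, for reference:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// The affine block from the IPAM trace above.
    	block := netip.MustParsePrefix("192.168.75.0/26")
    	base := block.Addr() // 192.168.75.0, held by the vxlan.calico device
    	first := base.Next() // 192.168.75.1, the first workload IP claimed above
    	// A /26 leaves 32-26 = 6 host bits, so 64 addresses per block.
    	fmt.Println(block, base, first, 1<<(32-block.Bits()))
    }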
Dec 13 01:28:05.660742 containerd[1465]: 2024-12-13 01:28:05.618 [INFO][3967] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.1/26] IPv6=[] ContainerID="49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" HandleID="k8s-pod-network.49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:05.663827 containerd[1465]: 2024-12-13 01:28:05.622 [INFO][3945] cni-plugin/k8s.go 386: Populated endpoint ContainerID="49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" Namespace="kube-system" Pod="coredns-6f6b679f8f-hzxx4" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cee062b2-3528-4440-b732-1044f2d79299", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-6f6b679f8f-hzxx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f91207b77e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:05.663827 containerd[1465]: 2024-12-13 01:28:05.622 [INFO][3945] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.1/32] ContainerID="49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" Namespace="kube-system" Pod="coredns-6f6b679f8f-hzxx4" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:05.663827 containerd[1465]: 2024-12-13 01:28:05.622 [INFO][3945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f91207b77e ContainerID="49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" Namespace="kube-system" Pod="coredns-6f6b679f8f-hzxx4" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:05.663827 containerd[1465]: 2024-12-13 01:28:05.627 [INFO][3945] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" Namespace="kube-system" Pod="coredns-6f6b679f8f-hzxx4" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:05.663827 containerd[1465]: 2024-12-13 01:28:05.631 [INFO][3945] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" Namespace="kube-system" Pod="coredns-6f6b679f8f-hzxx4" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cee062b2-3528-4440-b732-1044f2d79299", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784", Pod:"coredns-6f6b679f8f-hzxx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f91207b77e", MAC:"a6:31:90:6e:40:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:05.663827 containerd[1465]: 2024-12-13 01:28:05.652 [INFO][3945] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784" Namespace="kube-system" Pod="coredns-6f6b679f8f-hzxx4" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:05.715210 containerd[1465]: time="2024-12-13T01:28:05.715061273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:05.715210 containerd[1465]: time="2024-12-13T01:28:05.715135142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:05.715569 containerd[1465]: time="2024-12-13T01:28:05.715152910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:05.715569 containerd[1465]: time="2024-12-13T01:28:05.715260202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:05.746835 systemd-networkd[1363]: cali082953b4e79: Link UP Dec 13 01:28:05.747762 systemd-networkd[1363]: cali082953b4e79: Gained carrier Dec 13 01:28:05.753598 systemd[1]: Started cri-containerd-49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784.scope - libcontainer container 49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784. Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.514 [INFO][3954] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0 coredns-6f6b679f8f- kube-system c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b 737 0 2024-12-13 01:27:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal coredns-6f6b679f8f-p55lg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali082953b4e79 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" Namespace="kube-system" Pod="coredns-6f6b679f8f-p55lg" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-" Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.514 [INFO][3954] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" Namespace="kube-system" Pod="coredns-6f6b679f8f-p55lg" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.573 [INFO][3971] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" HandleID="k8s-pod-network.9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.589 [INFO][3971] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" HandleID="k8s-pod-network.9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba8e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", "pod":"coredns-6f6b679f8f-p55lg", "timestamp":"2024-12-13 01:28:05.57386976 +0000 UTC"}, Hostname:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.589 [INFO][3971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.619 [INFO][3971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.619 [INFO][3971] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal' Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.686 [INFO][3971] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.697 [INFO][3971] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.706 [INFO][3971] ipam/ipam.go 489: Trying affinity for 192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.709 [INFO][3971] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.712 [INFO][3971] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.713 [INFO][3971] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.716 [INFO][3971] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282 Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.724 [INFO][3971] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.734 [INFO][3971] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.2/26] block=192.168.75.0/26 handle="k8s-pod-network.9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.734 [INFO][3971] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.2/26] handle="k8s-pod-network.9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.734 [INFO][3971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
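Note the interleaving: [3971] logged "About to acquire host-wide IPAM lock" at 01:28:05.589 but only acquired it at 01:28:05.619, immediately after [3967] released it at 01:28:05.618, and then claimed the next ordinal (.2). A sketch of that serialization under stated assumptions — here the lock is a local mutex, whereas Calico's real host-wide lock also covers the datastore write of the block:

    package main

    import (
        "fmt"
        "sync"
    )

    var (
        hostLock sync.Mutex
        nextOrd  = 1 // ordinal 0 assumed taken, as in the sketch above
    )

    func assign(pod string, wg *sync.WaitGroup) {
        defer wg.Done()
        hostLock.Lock()         // "About to acquire host-wide IPAM lock."
        defer hostLock.Unlock() // "Released host-wide IPAM lock."
        fmt.Printf("%s -> 192.168.75.%d/26\n", pod, nextOrd)
        nextOrd++ // the claim is recorded before the lock is released
    }

    func main() {
        var wg sync.WaitGroup
        for _, pod := range []string{"coredns-6f6b679f8f-hzxx4", "coredns-6f6b679f8f-p55lg"} {
            wg.Add(1)
            go assign(pod, &wg)
        }
        wg.Wait() // which pod gets .1 depends on scheduling; the lock only
                  // guarantees the two claims can never overlap
    }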
Dec 13 01:28:05.774180 containerd[1465]: 2024-12-13 01:28:05.734 [INFO][3971] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.2/26] IPv6=[] ContainerID="9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" HandleID="k8s-pod-network.9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:05.777175 containerd[1465]: 2024-12-13 01:28:05.740 [INFO][3954] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" Namespace="kube-system" Pod="coredns-6f6b679f8f-p55lg" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-6f6b679f8f-p55lg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali082953b4e79", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:05.777175 containerd[1465]: 2024-12-13 01:28:05.740 [INFO][3954] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.2/32] ContainerID="9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" Namespace="kube-system" Pod="coredns-6f6b679f8f-p55lg" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:05.777175 containerd[1465]: 2024-12-13 01:28:05.740 [INFO][3954] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali082953b4e79 ContainerID="9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" Namespace="kube-system" Pod="coredns-6f6b679f8f-p55lg" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:05.777175 containerd[1465]: 2024-12-13 01:28:05.744 [INFO][3954] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" Namespace="kube-system" Pod="coredns-6f6b679f8f-p55lg" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:05.777175 containerd[1465]: 2024-12-13 01:28:05.745 [INFO][3954] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" Namespace="kube-system" Pod="coredns-6f6b679f8f-p55lg" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282", Pod:"coredns-6f6b679f8f-p55lg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali082953b4e79", MAC:"1e:0a:9f:45:aa:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:05.777175 containerd[1465]: 2024-12-13 01:28:05.771 [INFO][3954] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282" Namespace="kube-system" Pod="coredns-6f6b679f8f-p55lg" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:05.821373 containerd[1465]: time="2024-12-13T01:28:05.820245353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:05.821872 containerd[1465]: time="2024-12-13T01:28:05.821702727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:05.822460 containerd[1465]: time="2024-12-13T01:28:05.822130813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:05.824814 containerd[1465]: time="2024-12-13T01:28:05.824746399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:05.869574 systemd[1]: Started cri-containerd-9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282.scope - libcontainer container 9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282. Dec 13 01:28:05.876283 containerd[1465]: time="2024-12-13T01:28:05.876116295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hzxx4,Uid:cee062b2-3528-4440-b732-1044f2d79299,Namespace:kube-system,Attempt:1,} returns sandbox id \"49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784\"" Dec 13 01:28:05.884787 containerd[1465]: time="2024-12-13T01:28:05.884540768Z" level=info msg="CreateContainer within sandbox \"49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:05.906046 containerd[1465]: time="2024-12-13T01:28:05.905979178Z" level=info msg="CreateContainer within sandbox \"49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9abd4a2fee0c9e666c613704f8904860fbf30d7da5214a730583562a8007e01b\"" Dec 13 01:28:05.908200 containerd[1465]: time="2024-12-13T01:28:05.907009210Z" level=info msg="StartContainer for \"9abd4a2fee0c9e666c613704f8904860fbf30d7da5214a730583562a8007e01b\"" Dec 13 01:28:05.952823 systemd[1]: Started cri-containerd-9abd4a2fee0c9e666c613704f8904860fbf30d7da5214a730583562a8007e01b.scope - libcontainer container 9abd4a2fee0c9e666c613704f8904860fbf30d7da5214a730583562a8007e01b. Dec 13 01:28:05.969579 containerd[1465]: time="2024-12-13T01:28:05.969532932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p55lg,Uid:c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b,Namespace:kube-system,Attempt:1,} returns sandbox id \"9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282\"" Dec 13 01:28:05.974133 containerd[1465]: time="2024-12-13T01:28:05.974093178Z" level=info msg="CreateContainer within sandbox \"9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:05.995432 containerd[1465]: time="2024-12-13T01:28:05.995299349Z" level=info msg="CreateContainer within sandbox \"9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"623af7bd3aeb4ad1b57786dcbe0791b5feb09de7c0b84f1fc4763d5ef4bc638e\"" Dec 13 01:28:05.996696 containerd[1465]: time="2024-12-13T01:28:05.996657270Z" level=info msg="StartContainer for \"623af7bd3aeb4ad1b57786dcbe0791b5feb09de7c0b84f1fc4763d5ef4bc638e\"" Dec 13 01:28:06.019751 containerd[1465]: time="2024-12-13T01:28:06.019681475Z" level=info msg="StartContainer for \"9abd4a2fee0c9e666c613704f8904860fbf30d7da5214a730583562a8007e01b\" returns successfully" Dec 13 01:28:06.056598 systemd[1]: Started cri-containerd-623af7bd3aeb4ad1b57786dcbe0791b5feb09de7c0b84f1fc4763d5ef4bc638e.scope - libcontainer container 623af7bd3aeb4ad1b57786dcbe0791b5feb09de7c0b84f1fc4763d5ef4bc638e. 
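Each pod above traverses the same runtime sequence: RunPodSandbox returns a sandbox ID, CreateContainer within that sandbox returns a container ID, and StartContainer runs it (the "returns successfully" record follows below). The hypothetical Go interface here mirrors only that call order; the names and signatures are illustrative, not the real k8s.io/cri-api definitions.

    package main

    import "fmt"

    type runtime interface {
        RunPodSandbox(pod string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // fakeRuntime hands out sequential IDs so the call order is visible.
    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
        f.n++
        return fmt.Sprintf("sandbox-%d", f.n), nil
    }

    func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
        f.n++
        return fmt.Sprintf("container-%d", f.n), nil
    }

    func (f *fakeRuntime) StartContainer(id string) error { return nil }

    func main() {
        var r runtime = &fakeRuntime{}
        sb, _ := r.RunPodSandbox("coredns-6f6b679f8f-hzxx4")
        c, _ := r.CreateContainer(sb, "coredns")
        if err := r.StartContainer(c); err == nil {
            fmt.Println("StartContainer for", c, "returns successfully") // as logged
        }
    }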
Dec 13 01:28:06.107321 containerd[1465]: time="2024-12-13T01:28:06.107164460Z" level=info msg="StartContainer for \"623af7bd3aeb4ad1b57786dcbe0791b5feb09de7c0b84f1fc4763d5ef4bc638e\" returns successfully" Dec 13 01:28:06.231044 containerd[1465]: time="2024-12-13T01:28:06.230548432Z" level=info msg="StopPodSandbox for \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\"" Dec 13 01:28:06.231306 containerd[1465]: time="2024-12-13T01:28:06.231272595Z" level=info msg="StopPodSandbox for \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\"" Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.326 [INFO][4198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.326 [INFO][4198] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" iface="eth0" netns="/var/run/netns/cni-141a1b32-d46d-a8c8-a783-0b6ee911e597" Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.327 [INFO][4198] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" iface="eth0" netns="/var/run/netns/cni-141a1b32-d46d-a8c8-a783-0b6ee911e597" Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.327 [INFO][4198] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" iface="eth0" netns="/var/run/netns/cni-141a1b32-d46d-a8c8-a783-0b6ee911e597" Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.327 [INFO][4198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.327 [INFO][4198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.369 [INFO][4212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" HandleID="k8s-pod-network.c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.369 [INFO][4212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.370 [INFO][4212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.383 [WARNING][4212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" HandleID="k8s-pod-network.c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.383 [INFO][4212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" HandleID="k8s-pod-network.c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.385 [INFO][4212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:06.388707 containerd[1465]: 2024-12-13 01:28:06.387 [INFO][4198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:28:06.390838 containerd[1465]: time="2024-12-13T01:28:06.388908174Z" level=info msg="TearDown network for sandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\" successfully" Dec 13 01:28:06.390838 containerd[1465]: time="2024-12-13T01:28:06.388942992Z" level=info msg="StopPodSandbox for \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\" returns successfully" Dec 13 01:28:06.390838 containerd[1465]: time="2024-12-13T01:28:06.390025675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bb5cr,Uid:a545ca75-b4b0-41f8-ba2f-947389823539,Namespace:calico-system,Attempt:1,}" Dec 13 01:28:06.417160 systemd[1]: run-netns-cni\x2d141a1b32\x2dd46d\x2da8c8\x2da783\x2d0b6ee911e597.mount: Deactivated successfully. Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.322 [INFO][4199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.323 [INFO][4199] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" iface="eth0" netns="/var/run/netns/cni-faf9cbc6-0538-6801-4a21-ec751a00b319" Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.323 [INFO][4199] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" iface="eth0" netns="/var/run/netns/cni-faf9cbc6-0538-6801-4a21-ec751a00b319" Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.324 [INFO][4199] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" iface="eth0" netns="/var/run/netns/cni-faf9cbc6-0538-6801-4a21-ec751a00b319" Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.325 [INFO][4199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.325 [INFO][4199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.383 [INFO][4211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" HandleID="k8s-pod-network.7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.384 [INFO][4211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.385 [INFO][4211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.405 [WARNING][4211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" HandleID="k8s-pod-network.7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.405 [INFO][4211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" HandleID="k8s-pod-network.7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.410 [INFO][4211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:06.429844 containerd[1465]: 2024-12-13 01:28:06.414 [INFO][4199] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:28:06.435847 containerd[1465]: time="2024-12-13T01:28:06.435397810Z" level=info msg="TearDown network for sandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\" successfully" Dec 13 01:28:06.435847 containerd[1465]: time="2024-12-13T01:28:06.435438467Z" level=info msg="StopPodSandbox for \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\" returns successfully" Dec 13 01:28:06.438689 systemd[1]: run-netns-cni\x2dfaf9cbc6\x2d0538\x2d6801\x2d4a21\x2dec751a00b319.mount: Deactivated successfully. 
Dec 13 01:28:06.441503 containerd[1465]: time="2024-12-13T01:28:06.441461288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dcc75bf-2tgk7,Uid:d6b02353-2595-4ed5-9b02-d81f11de016f,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:28:06.494674 kubelet[2536]: I1213 01:28:06.493985 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-p55lg" podStartSLOduration=32.493960132 podStartE2EDuration="32.493960132s" podCreationTimestamp="2024-12-13 01:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:06.492530296 +0000 UTC m=+39.411465304" watchObservedRunningTime="2024-12-13 01:28:06.493960132 +0000 UTC m=+39.412895137" Dec 13 01:28:06.735978 systemd-networkd[1363]: caliad2ea773366: Link UP Dec 13 01:28:06.737429 systemd-networkd[1363]: caliad2ea773366: Gained carrier Dec 13 01:28:06.758219 kubelet[2536]: I1213 01:28:06.757116 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hzxx4" podStartSLOduration=32.757089284 podStartE2EDuration="32.757089284s" podCreationTimestamp="2024-12-13 01:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:06.567033299 +0000 UTC m=+39.485968307" watchObservedRunningTime="2024-12-13 01:28:06.757089284 +0000 UTC m=+39.676024293" Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.584 [INFO][4233] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0 calico-apiserver-6f8dcc75bf- calico-apiserver d6b02353-2595-4ed5-9b02-d81f11de016f 755 0 2024-12-13 01:27:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f8dcc75bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal calico-apiserver-6f8dcc75bf-2tgk7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliad2ea773366 [] []}} ContainerID="a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-2tgk7" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-" Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.586 [INFO][4233] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-2tgk7" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.661 [INFO][4252] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" HandleID="k8s-pod-network.a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 
01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.685 [INFO][4252] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" HandleID="k8s-pod-network.a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319690), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", "pod":"calico-apiserver-6f8dcc75bf-2tgk7", "timestamp":"2024-12-13 01:28:06.661749773 +0000 UTC"}, Hostname:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.685 [INFO][4252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.685 [INFO][4252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.685 [INFO][4252] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal' Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.689 [INFO][4252] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.700 [INFO][4252] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.706 [INFO][4252] ipam/ipam.go 489: Trying affinity for 192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.709 [INFO][4252] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.712 [INFO][4252] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.712 [INFO][4252] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.714 [INFO][4252] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26 Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.720 [INFO][4252] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.727 [INFO][4252] ipam/ipam.go 1216: Successfully claimed IPs: 
[192.168.75.3/26] block=192.168.75.0/26 handle="k8s-pod-network.a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.727 [INFO][4252] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.3/26] handle="k8s-pod-network.a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.727 [INFO][4252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:06.760560 containerd[1465]: 2024-12-13 01:28:06.728 [INFO][4252] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.3/26] IPv6=[] ContainerID="a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" HandleID="k8s-pod-network.a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:06.762995 containerd[1465]: 2024-12-13 01:28:06.731 [INFO][4233] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-2tgk7" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0", GenerateName:"calico-apiserver-6f8dcc75bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6b02353-2595-4ed5-9b02-d81f11de016f", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dcc75bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-6f8dcc75bf-2tgk7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad2ea773366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:06.762995 containerd[1465]: 2024-12-13 01:28:06.731 [INFO][4233] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.3/32] ContainerID="a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-2tgk7" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:06.762995 containerd[1465]: 2024-12-13 01:28:06.731 [INFO][4233] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad2ea773366 ContainerID="a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-2tgk7" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:06.762995 containerd[1465]: 2024-12-13 01:28:06.738 [INFO][4233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-2tgk7" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:06.762995 containerd[1465]: 2024-12-13 01:28:06.739 [INFO][4233] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-2tgk7" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0", GenerateName:"calico-apiserver-6f8dcc75bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6b02353-2595-4ed5-9b02-d81f11de016f", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dcc75bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26", Pod:"calico-apiserver-6f8dcc75bf-2tgk7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad2ea773366", MAC:"26:9b:5a:2e:e8:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:06.762995 containerd[1465]: 2024-12-13 01:28:06.755 [INFO][4233] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-2tgk7" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:06.803823 containerd[1465]: time="2024-12-13T01:28:06.803073818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:06.803823 containerd[1465]: time="2024-12-13T01:28:06.803226471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:06.803823 containerd[1465]: time="2024-12-13T01:28:06.803257332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:06.807577 containerd[1465]: time="2024-12-13T01:28:06.805823657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:06.844881 systemd[1]: Started cri-containerd-a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26.scope - libcontainer container a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26. Dec 13 01:28:06.857780 systemd-networkd[1363]: cali5e74b99ec47: Link UP Dec 13 01:28:06.859781 systemd-networkd[1363]: cali5e74b99ec47: Gained carrier Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.585 [INFO][4224] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0 csi-node-driver- calico-system a545ca75-b4b0-41f8-ba2f-947389823539 756 0 2024-12-13 01:27:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal csi-node-driver-bb5cr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5e74b99ec47 [] []}} ContainerID="17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" Namespace="calico-system" Pod="csi-node-driver-bb5cr" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-" Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.587 [INFO][4224] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" Namespace="calico-system" Pod="csi-node-driver-bb5cr" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.675 [INFO][4248] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" HandleID="k8s-pod-network.17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.688 [INFO][4248] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" HandleID="k8s-pod-network.17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ec0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", "pod":"csi-node-driver-bb5cr", "timestamp":"2024-12-13 01:28:06.675623158 +0000 UTC"}, Hostname:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.688 [INFO][4248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.728 [INFO][4248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.728 [INFO][4248] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal' Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.791 [INFO][4248] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.801 [INFO][4248] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.808 [INFO][4248] ipam/ipam.go 489: Trying affinity for 192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.810 [INFO][4248] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.814 [INFO][4248] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.814 [INFO][4248] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.816 [INFO][4248] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.826 [INFO][4248] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.845 [INFO][4248] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.4/26] block=192.168.75.0/26 handle="k8s-pod-network.17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.845 [INFO][4248] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.4/26] handle="k8s-pod-network.17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.847 
[INFO][4248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:06.885774 containerd[1465]: 2024-12-13 01:28:06.847 [INFO][4248] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.4/26] IPv6=[] ContainerID="17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" HandleID="k8s-pod-network.17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:06.892967 containerd[1465]: 2024-12-13 01:28:06.851 [INFO][4224] cni-plugin/k8s.go 386: Populated endpoint ContainerID="17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" Namespace="calico-system" Pod="csi-node-driver-bb5cr" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a545ca75-b4b0-41f8-ba2f-947389823539", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-bb5cr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e74b99ec47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:06.892967 containerd[1465]: 2024-12-13 01:28:06.851 [INFO][4224] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.4/32] ContainerID="17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" Namespace="calico-system" Pod="csi-node-driver-bb5cr" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:06.892967 containerd[1465]: 2024-12-13 01:28:06.852 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e74b99ec47 ContainerID="17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" Namespace="calico-system" Pod="csi-node-driver-bb5cr" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:06.892967 containerd[1465]: 2024-12-13 01:28:06.860 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" Namespace="calico-system" Pod="csi-node-driver-bb5cr" 
WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:06.892967 containerd[1465]: 2024-12-13 01:28:06.861 [INFO][4224] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" Namespace="calico-system" Pod="csi-node-driver-bb5cr" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a545ca75-b4b0-41f8-ba2f-947389823539", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d", Pod:"csi-node-driver-bb5cr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e74b99ec47", MAC:"be:93:38:32:dc:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:06.892967 containerd[1465]: 2024-12-13 01:28:06.880 [INFO][4224] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d" Namespace="calico-system" Pod="csi-node-driver-bb5cr" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:06.941173 containerd[1465]: time="2024-12-13T01:28:06.939659564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:06.941173 containerd[1465]: time="2024-12-13T01:28:06.939990941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:06.941173 containerd[1465]: time="2024-12-13T01:28:06.940037570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:06.941173 containerd[1465]: time="2024-12-13T01:28:06.940437251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:06.984677 systemd[1]: Started cri-containerd-17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d.scope - libcontainer container 17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d. Dec 13 01:28:06.989399 containerd[1465]: time="2024-12-13T01:28:06.989336573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dcc75bf-2tgk7,Uid:d6b02353-2595-4ed5-9b02-d81f11de016f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26\"" Dec 13 01:28:06.993867 containerd[1465]: time="2024-12-13T01:28:06.993775154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:28:07.036907 containerd[1465]: time="2024-12-13T01:28:07.036846123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bb5cr,Uid:a545ca75-b4b0-41f8-ba2f-947389823539,Namespace:calico-system,Attempt:1,} returns sandbox id \"17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d\"" Dec 13 01:28:07.232904 containerd[1465]: time="2024-12-13T01:28:07.232256075Z" level=info msg="StopPodSandbox for \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\"" Dec 13 01:28:07.233131 containerd[1465]: time="2024-12-13T01:28:07.233029691Z" level=info msg="StopPodSandbox for \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\"" Dec 13 01:28:07.301875 systemd-networkd[1363]: cali3f91207b77e: Gained IPv6LL Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.323 [INFO][4405] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.323 [INFO][4405] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" iface="eth0" netns="/var/run/netns/cni-1dd010d9-0685-2513-8fb9-11810ff2fa9f" Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.324 [INFO][4405] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" iface="eth0" netns="/var/run/netns/cni-1dd010d9-0685-2513-8fb9-11810ff2fa9f" Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.325 [INFO][4405] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" iface="eth0" netns="/var/run/netns/cni-1dd010d9-0685-2513-8fb9-11810ff2fa9f" Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.325 [INFO][4405] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.325 [INFO][4405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.381 [INFO][4417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" HandleID="k8s-pod-network.83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.381 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.381 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.391 [WARNING][4417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" HandleID="k8s-pod-network.83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.391 [INFO][4417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" HandleID="k8s-pod-network.83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.397 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:07.411111 containerd[1465]: 2024-12-13 01:28:07.403 [INFO][4405] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:28:07.417011 containerd[1465]: time="2024-12-13T01:28:07.413513047Z" level=info msg="TearDown network for sandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\" successfully" Dec 13 01:28:07.417011 containerd[1465]: time="2024-12-13T01:28:07.413687764Z" level=info msg="StopPodSandbox for \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\" returns successfully" Dec 13 01:28:07.418837 containerd[1465]: time="2024-12-13T01:28:07.418798165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dcc75bf-xj2tb,Uid:ca0e53dc-8a33-4adf-906d-b0bf232eb9c0,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:28:07.420187 systemd[1]: run-netns-cni\x2d1dd010d9\x2d0685\x2d2513\x2d8fb9\x2d11810ff2fa9f.mount: Deactivated successfully. 
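    The teardown above follows the same pattern as the assignment before it: every IPAM mutation is serialized behind a single host-wide lock ("About to acquire" / "Acquired" / "Released host-wide IPAM lock"), and a release for an address that is already gone is logged as a WARNING and ignored rather than failed. Below is a minimal, purely illustrative Go model of that locking and idempotent-release behaviour; the names (hostIPAM, Assign, ReleaseByHandle) are invented, and Calico's real plugin uses a file-based host lock plus a datastore, not a mutex and a map.

    // Illustrative sketch only, not Calico's API.
    package main

    import (
        "fmt"
        "sync"
    )

    type hostIPAM struct {
        mu       sync.Mutex        // stands in for the "host-wide IPAM lock"
        byHandle map[string]string // handleID -> assigned IP
    }

    func (h *hostIPAM) Assign(handleID, ip string) {
        h.mu.Lock() // "About to acquire host-wide IPAM lock."
        defer h.mu.Unlock()
        h.byHandle[handleID] = ip
    }

    // ReleaseByHandle mirrors the WARNING in the log: releasing an
    // address that is already gone is ignored, never an error, so
    // sandbox teardown can be retried safely.
    func (h *hostIPAM) ReleaseByHandle(handleID string) {
        h.mu.Lock()
        defer h.mu.Unlock()
        if _, ok := h.byHandle[handleID]; !ok {
            fmt.Println("asked to release address but it doesn't exist; ignoring", handleID)
            return
        }
        delete(h.byHandle, handleID)
    }

    func main() {
        ipam := &hostIPAM{byHandle: map[string]string{}}
        ipam.Assign("k8s-pod-network.17f33a92a104", "192.168.75.4")
        ipam.ReleaseByHandle("k8s-pod-network.17f33a92a104")
        ipam.ReleaseByHandle("k8s-pod-network.83fa5ceeecb5") // already gone: ignored
    }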
Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.330 [INFO][4404] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.330 [INFO][4404] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" iface="eth0" netns="/var/run/netns/cni-c0dee000-9e20-24a5-542c-bf9df2370300" Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.332 [INFO][4404] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" iface="eth0" netns="/var/run/netns/cni-c0dee000-9e20-24a5-542c-bf9df2370300" Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.333 [INFO][4404] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" iface="eth0" netns="/var/run/netns/cni-c0dee000-9e20-24a5-542c-bf9df2370300" Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.333 [INFO][4404] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.333 [INFO][4404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.383 [INFO][4421] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" HandleID="k8s-pod-network.ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.384 [INFO][4421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.397 [INFO][4421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.417 [WARNING][4421] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" HandleID="k8s-pod-network.ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.418 [INFO][4421] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" HandleID="k8s-pod-network.ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.422 [INFO][4421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:07.427465 containerd[1465]: 2024-12-13 01:28:07.425 [INFO][4404] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Dec 13 01:28:07.428763 containerd[1465]: time="2024-12-13T01:28:07.427737313Z" level=info msg="TearDown network for sandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\" successfully" Dec 13 01:28:07.428763 containerd[1465]: time="2024-12-13T01:28:07.427768514Z" level=info msg="StopPodSandbox for \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\" returns successfully" Dec 13 01:28:07.430745 containerd[1465]: time="2024-12-13T01:28:07.430707449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9796d998-b2p2w,Uid:c78149af-1946-4fbc-9d93-badab5c4fb43,Namespace:calico-system,Attempt:1,}" Dec 13 01:28:07.434026 systemd[1]: run-netns-cni\x2dc0dee000\x2d9e20\x2d24a5\x2d542c\x2dbf9df2370300.mount: Deactivated successfully. Dec 13 01:28:07.621770 systemd-networkd[1363]: cali082953b4e79: Gained IPv6LL Dec 13 01:28:07.696868 systemd-networkd[1363]: cali8c61d2a1d5e: Link UP Dec 13 01:28:07.698404 systemd-networkd[1363]: cali8c61d2a1d5e: Gained carrier Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.559 [INFO][4430] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0 calico-apiserver-6f8dcc75bf- calico-apiserver ca0e53dc-8a33-4adf-906d-b0bf232eb9c0 782 0 2024-12-13 01:27:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f8dcc75bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal calico-apiserver-6f8dcc75bf-xj2tb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8c61d2a1d5e [] []}} ContainerID="5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-xj2tb" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-" Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.559 [INFO][4430] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-xj2tb" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.627 [INFO][4455] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" HandleID="k8s-pod-network.5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.641 [INFO][4455] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" HandleID="k8s-pod-network.5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000513c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", "pod":"calico-apiserver-6f8dcc75bf-xj2tb", "timestamp":"2024-12-13 01:28:07.627496041 +0000 UTC"}, Hostname:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.641 [INFO][4455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.641 [INFO][4455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.641 [INFO][4455] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal' Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.644 [INFO][4455] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.650 [INFO][4455] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.663 [INFO][4455] ipam/ipam.go 489: Trying affinity for 192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.667 [INFO][4455] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.671 [INFO][4455] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.671 [INFO][4455] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.672 [INFO][4455] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57 Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.679 [INFO][4455] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.688 [INFO][4455] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.5/26] block=192.168.75.0/26 handle="k8s-pod-network.5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.688 [INFO][4455] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.5/26] 
handle="k8s-pod-network.5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.688 [INFO][4455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:07.719997 containerd[1465]: 2024-12-13 01:28:07.688 [INFO][4455] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.5/26] IPv6=[] ContainerID="5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" HandleID="k8s-pod-network.5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:07.722096 containerd[1465]: 2024-12-13 01:28:07.691 [INFO][4430] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-xj2tb" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0", GenerateName:"calico-apiserver-6f8dcc75bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca0e53dc-8a33-4adf-906d-b0bf232eb9c0", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dcc75bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-6f8dcc75bf-xj2tb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c61d2a1d5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:07.722096 containerd[1465]: 2024-12-13 01:28:07.691 [INFO][4430] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.5/32] ContainerID="5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-xj2tb" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:07.722096 containerd[1465]: 2024-12-13 01:28:07.691 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c61d2a1d5e ContainerID="5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-xj2tb" 
WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:07.722096 containerd[1465]: 2024-12-13 01:28:07.697 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-xj2tb" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:07.722096 containerd[1465]: 2024-12-13 01:28:07.698 [INFO][4430] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-xj2tb" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0", GenerateName:"calico-apiserver-6f8dcc75bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca0e53dc-8a33-4adf-906d-b0bf232eb9c0", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dcc75bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57", Pod:"calico-apiserver-6f8dcc75bf-xj2tb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c61d2a1d5e", MAC:"c6:a0:73:3e:1c:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:07.722096 containerd[1465]: 2024-12-13 01:28:07.716 [INFO][4430] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dcc75bf-xj2tb" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:07.766017 containerd[1465]: time="2024-12-13T01:28:07.763127056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:07.766017 containerd[1465]: time="2024-12-13T01:28:07.763203243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:07.766017 containerd[1465]: time="2024-12-13T01:28:07.763229146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:07.766017 containerd[1465]: time="2024-12-13T01:28:07.763787341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:07.806622 systemd[1]: Started cri-containerd-5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57.scope - libcontainer container 5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57. Dec 13 01:28:07.813991 systemd-networkd[1363]: caliad2ea773366: Gained IPv6LL Dec 13 01:28:07.836687 systemd-networkd[1363]: califaec76748f9: Link UP Dec 13 01:28:07.838134 systemd-networkd[1363]: califaec76748f9: Gained carrier Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.583 [INFO][4441] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0 calico-kube-controllers-5f9796d998- calico-system c78149af-1946-4fbc-9d93-badab5c4fb43 783 0 2024-12-13 01:27:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f9796d998 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal calico-kube-controllers-5f9796d998-b2p2w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califaec76748f9 [] []}} ContainerID="acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" Namespace="calico-system" Pod="calico-kube-controllers-5f9796d998-b2p2w" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-" Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.583 [INFO][4441] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" Namespace="calico-system" Pod="calico-kube-controllers-5f9796d998-b2p2w" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.649 [INFO][4460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" HandleID="k8s-pod-network.acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.666 [INFO][4460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" HandleID="k8s-pod-network.acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ede10), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", "pod":"calico-kube-controllers-5f9796d998-b2p2w", "timestamp":"2024-12-13 01:28:07.649143214 +0000 UTC"}, Hostname:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.666 [INFO][4460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.688 [INFO][4460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.688 [INFO][4460] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal' Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.746 [INFO][4460] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.765 [INFO][4460] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.779 [INFO][4460] ipam/ipam.go 489: Trying affinity for 192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.783 [INFO][4460] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.788 [INFO][4460] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.789 [INFO][4460] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.791 [INFO][4460] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.802 [INFO][4460] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.819 [INFO][4460] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.6/26] block=192.168.75.0/26 handle="k8s-pod-network.acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 01:28:07.820 [INFO][4460] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.6/26] handle="k8s-pod-network.acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" host="ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal" Dec 13 01:28:07.871521 containerd[1465]: 2024-12-13 
01:28:07.820 [INFO][4460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:07.874102 containerd[1465]: 2024-12-13 01:28:07.820 [INFO][4460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.6/26] IPv6=[] ContainerID="acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" HandleID="k8s-pod-network.acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" Dec 13 01:28:07.874102 containerd[1465]: 2024-12-13 01:28:07.827 [INFO][4441] cni-plugin/k8s.go 386: Populated endpoint ContainerID="acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" Namespace="calico-system" Pod="calico-kube-controllers-5f9796d998-b2p2w" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0", GenerateName:"calico-kube-controllers-5f9796d998-", Namespace:"calico-system", SelfLink:"", UID:"c78149af-1946-4fbc-9d93-badab5c4fb43", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9796d998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-5f9796d998-b2p2w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califaec76748f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:07.874102 containerd[1465]: 2024-12-13 01:28:07.827 [INFO][4441] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.6/32] ContainerID="acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" Namespace="calico-system" Pod="calico-kube-controllers-5f9796d998-b2p2w" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" Dec 13 01:28:07.874102 containerd[1465]: 2024-12-13 01:28:07.827 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califaec76748f9 ContainerID="acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" Namespace="calico-system" Pod="calico-kube-controllers-5f9796d998-b2p2w" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" Dec 13 01:28:07.874102 containerd[1465]: 2024-12-13 01:28:07.837 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" Namespace="calico-system" Pod="calico-kube-controllers-5f9796d998-b2p2w" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" Dec 13 01:28:07.874102 containerd[1465]: 2024-12-13 01:28:07.842 [INFO][4441] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" Namespace="calico-system" Pod="calico-kube-controllers-5f9796d998-b2p2w" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0", GenerateName:"calico-kube-controllers-5f9796d998-", Namespace:"calico-system", SelfLink:"", UID:"c78149af-1946-4fbc-9d93-badab5c4fb43", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9796d998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c", Pod:"calico-kube-controllers-5f9796d998-b2p2w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califaec76748f9", MAC:"ee:67:ae:21:a2:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:07.874915 containerd[1465]: 2024-12-13 01:28:07.861 [INFO][4441] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c" Namespace="calico-system" Pod="calico-kube-controllers-5f9796d998-b2p2w" WorkloadEndpoint="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0" Dec 13 01:28:07.965359 containerd[1465]: time="2024-12-13T01:28:07.964782577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:07.965359 containerd[1465]: time="2024-12-13T01:28:07.965025766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:07.965359 containerd[1465]: time="2024-12-13T01:28:07.965092608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:07.965359 containerd[1465]: time="2024-12-13T01:28:07.965265422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:07.972644 containerd[1465]: time="2024-12-13T01:28:07.972316987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dcc75bf-xj2tb,Uid:ca0e53dc-8a33-4adf-906d-b0bf232eb9c0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57\"" Dec 13 01:28:08.002527 systemd[1]: Started cri-containerd-acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c.scope - libcontainer container acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c. Dec 13 01:28:08.077074 containerd[1465]: time="2024-12-13T01:28:08.076909564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9796d998-b2p2w,Uid:c78149af-1946-4fbc-9d93-badab5c4fb43,Namespace:calico-system,Attempt:1,} returns sandbox id \"acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c\"" Dec 13 01:28:08.517699 systemd-networkd[1363]: cali5e74b99ec47: Gained IPv6LL Dec 13 01:28:08.838147 systemd-networkd[1363]: cali8c61d2a1d5e: Gained IPv6LL Dec 13 01:28:09.369962 containerd[1465]: time="2024-12-13T01:28:09.369882806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:09.371611 containerd[1465]: time="2024-12-13T01:28:09.371521575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:28:09.373301 containerd[1465]: time="2024-12-13T01:28:09.373221183Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:09.378391 containerd[1465]: time="2024-12-13T01:28:09.377859206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:09.379206 containerd[1465]: time="2024-12-13T01:28:09.378842629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.384965788s" Dec 13 01:28:09.379206 containerd[1465]: time="2024-12-13T01:28:09.378890305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:28:09.380849 containerd[1465]: time="2024-12-13T01:28:09.380821156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:28:09.383929 containerd[1465]: time="2024-12-13T01:28:09.383894017Z" level=info msg="CreateContainer within sandbox \"a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:28:09.402763 containerd[1465]: time="2024-12-13T01:28:09.402713524Z" level=info msg="CreateContainer within sandbox 
\"a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"13e449017a857c5e076e3417a36cfe7d46ab1bfcd171ba3c7681f45cee1e24f5\"" Dec 13 01:28:09.406059 containerd[1465]: time="2024-12-13T01:28:09.403369095Z" level=info msg="StartContainer for \"13e449017a857c5e076e3417a36cfe7d46ab1bfcd171ba3c7681f45cee1e24f5\"" Dec 13 01:28:09.466564 systemd[1]: Started cri-containerd-13e449017a857c5e076e3417a36cfe7d46ab1bfcd171ba3c7681f45cee1e24f5.scope - libcontainer container 13e449017a857c5e076e3417a36cfe7d46ab1bfcd171ba3c7681f45cee1e24f5. Dec 13 01:28:09.530658 containerd[1465]: time="2024-12-13T01:28:09.530153249Z" level=info msg="StartContainer for \"13e449017a857c5e076e3417a36cfe7d46ab1bfcd171ba3c7681f45cee1e24f5\" returns successfully" Dec 13 01:28:09.797742 systemd-networkd[1363]: califaec76748f9: Gained IPv6LL Dec 13 01:28:10.544283 kubelet[2536]: I1213 01:28:10.544138 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f8dcc75bf-2tgk7" podStartSLOduration=28.156423466 podStartE2EDuration="30.544108817s" podCreationTimestamp="2024-12-13 01:27:40 +0000 UTC" firstStartedPulling="2024-12-13 01:28:06.992865321 +0000 UTC m=+39.911800306" lastFinishedPulling="2024-12-13 01:28:09.380550657 +0000 UTC m=+42.299485657" observedRunningTime="2024-12-13 01:28:10.541270251 +0000 UTC m=+43.460205261" watchObservedRunningTime="2024-12-13 01:28:10.544108817 +0000 UTC m=+43.463043825" Dec 13 01:28:10.560717 containerd[1465]: time="2024-12-13T01:28:10.560642891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:10.563811 containerd[1465]: time="2024-12-13T01:28:10.563755057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:28:10.566998 containerd[1465]: time="2024-12-13T01:28:10.566961758Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:10.570944 containerd[1465]: time="2024-12-13T01:28:10.570586196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:10.576412 containerd[1465]: time="2024-12-13T01:28:10.576270610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.194996471s" Dec 13 01:28:10.576412 containerd[1465]: time="2024-12-13T01:28:10.576314883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:28:10.578645 containerd[1465]: time="2024-12-13T01:28:10.578603294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:28:10.579692 containerd[1465]: time="2024-12-13T01:28:10.579646576Z" level=info msg="CreateContainer within sandbox \"17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:28:10.609258 containerd[1465]: time="2024-12-13T01:28:10.609091813Z" level=info msg="CreateContainer within sandbox \"17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1dcf8ac1fea9338444fcdf98e46e9f078a2efd58c5df141bdfed7c5d9fe1160c\"" Dec 13 01:28:10.615135 containerd[1465]: time="2024-12-13T01:28:10.614202165Z" level=info msg="StartContainer for \"1dcf8ac1fea9338444fcdf98e46e9f078a2efd58c5df141bdfed7c5d9fe1160c\"" Dec 13 01:28:10.631114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2053982632.mount: Deactivated successfully. Dec 13 01:28:10.704619 systemd[1]: Started cri-containerd-1dcf8ac1fea9338444fcdf98e46e9f078a2efd58c5df141bdfed7c5d9fe1160c.scope - libcontainer container 1dcf8ac1fea9338444fcdf98e46e9f078a2efd58c5df141bdfed7c5d9fe1160c. Dec 13 01:28:10.796631 containerd[1465]: time="2024-12-13T01:28:10.796396369Z" level=info msg="StartContainer for \"1dcf8ac1fea9338444fcdf98e46e9f078a2efd58c5df141bdfed7c5d9fe1160c\" returns successfully" Dec 13 01:28:10.796787 containerd[1465]: time="2024-12-13T01:28:10.796677784Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:10.798898 containerd[1465]: time="2024-12-13T01:28:10.798457348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:28:10.807813 containerd[1465]: time="2024-12-13T01:28:10.807710167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 229.060546ms" Dec 13 01:28:10.807813 containerd[1465]: time="2024-12-13T01:28:10.807772062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:28:10.811750 containerd[1465]: time="2024-12-13T01:28:10.811717849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:28:10.814160 containerd[1465]: time="2024-12-13T01:28:10.814025852Z" level=info msg="CreateContainer within sandbox \"5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:28:10.841387 containerd[1465]: time="2024-12-13T01:28:10.841262498Z" level=info msg="CreateContainer within sandbox \"5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4d02ec5b83d3a6193bcfd19418d166900614ae22915d198f67285963dac6a739\"" Dec 13 01:28:10.844740 containerd[1465]: time="2024-12-13T01:28:10.844582428Z" level=info msg="StartContainer for \"4d02ec5b83d3a6193bcfd19418d166900614ae22915d198f67285963dac6a739\"" Dec 13 01:28:10.910752 systemd[1]: Started cri-containerd-4d02ec5b83d3a6193bcfd19418d166900614ae22915d198f67285963dac6a739.scope - libcontainer container 4d02ec5b83d3a6193bcfd19418d166900614ae22915d198f67285963dac6a739. 
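    For scale: the two apiserver "pulls" above differ by an order of magnitude because only the first one moved data (42,001,404 bytes read over 2.38s); the second found the layers already in the content store (77 bytes read, an ImageUpdate rather than an ImageCreate event) and finished in 229ms. A quick sanity check of the reported numbers, using only figures taken from the log:

    package main

    import (
        "fmt"
        "time"
    )

    // throughput converts a byte count over a duration into MiB/s.
    func throughput(bytes int64, d time.Duration) float64 {
        return float64(bytes) / d.Seconds() / (1 << 20)
    }

    func main() {
        first, _ := time.ParseDuration("2.384965788s")  // first apiserver pull
        second, _ := time.ParseDuration("229.060546ms") // second "pull"

        // 42,001,404 bytes actually read for the first pull (from the log).
        fmt.Printf("first pull:  %.1f MiB/s\n", throughput(42001404, first)) // ~16.8 MiB/s
        // The second pull read only 77 bytes: the image was already present,
        // so its duration is digest resolution, not transfer.
        fmt.Printf("second pull: %v, effectively a cache hit\n", second)
    }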
Dec 13 01:28:11.013380 containerd[1465]: time="2024-12-13T01:28:11.011999697Z" level=info msg="StartContainer for \"4d02ec5b83d3a6193bcfd19418d166900614ae22915d198f67285963dac6a739\" returns successfully" Dec 13 01:28:11.613974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039065490.mount: Deactivated successfully. Dec 13 01:28:11.832136 ntpd[1433]: Listen normally on 9 cali3f91207b77e [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:28:11.832272 ntpd[1433]: Listen normally on 10 cali082953b4e79 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:28:11.832392 ntpd[1433]: Listen normally on 11 caliad2ea773366 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:28:11.832478 ntpd[1433]: Listen normally on 12 cali5e74b99ec47 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:28:11.832536 ntpd[1433]: Listen normally on 13 cali8c61d2a1d5e [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:28:11.832590 ntpd[1433]: Listen normally on 14 califaec76748f9 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:28:12.228964 systemd[1]: run-containerd-runc-k8s.io-65814a5692f03436ffee4bf7bd97ab335dad7299d294727a83d7704967ae88a0-runc.7gnkhZ.mount: Deactivated successfully.
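    ntpd picks up each new Calico veth as it gains an IPv6 link-local address and opens a port 123 listener on it, one per interface. The sketch below is loosely analogous, not ntpd's actual rescan logic (which reacts to kernel interface events): it walks the cali* interfaces with Go's net package and reports the fe80:: address ntpd would bind.

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            if !strings.HasPrefix(ifc.Name, "cali") {
                continue
            }
            addrs, err := ifc.Addrs()
            if err != nil {
                continue
            }
            for _, a := range addrs {
                ipnet, ok := a.(*net.IPNet)
                // Keep only IPv6 link-local addresses (fe80::/10).
                if !ok || ipnet.IP.To4() != nil || !ipnet.IP.IsLinkLocalUnicast() {
                    continue
                }
                fmt.Printf("would listen on [%s%%%s]:123\n", ipnet.IP, ifc.Name)
            }
        }
    }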
Dec 13 01:28:12.532389 kubelet[2536]: I1213 01:28:12.532126 2536 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:12.601365 kubelet[2536]: I1213 01:28:12.601067 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f8dcc75bf-xj2tb" podStartSLOduration=29.765902089 podStartE2EDuration="32.601038849s" podCreationTimestamp="2024-12-13 01:27:40 +0000 UTC" firstStartedPulling="2024-12-13 01:28:07.975790047 +0000 UTC m=+40.894725309" lastFinishedPulling="2024-12-13 01:28:10.810927076 +0000 UTC m=+43.729862069" observedRunningTime="2024-12-13 01:28:11.547888472 +0000 UTC m=+44.466823482" watchObservedRunningTime="2024-12-13 01:28:12.601038849 +0000 UTC m=+45.519973868" Dec 13 01:28:13.457586 containerd[1465]: time="2024-12-13T01:28:13.457510444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:13.459106 containerd[1465]: time="2024-12-13T01:28:13.458888850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:28:13.460377 containerd[1465]: time="2024-12-13T01:28:13.460221593Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:13.464765 containerd[1465]: time="2024-12-13T01:28:13.464626121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:13.466397 containerd[1465]: time="2024-12-13T01:28:13.465718182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.653954995s" Dec 13 01:28:13.466397 containerd[1465]: time="2024-12-13T01:28:13.465774603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:28:13.467768 containerd[1465]: time="2024-12-13T01:28:13.467740495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:28:13.487941 containerd[1465]: time="2024-12-13T01:28:13.487898819Z" level=info msg="CreateContainer within sandbox \"acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:28:13.508162 containerd[1465]: time="2024-12-13T01:28:13.508103673Z" level=info msg="CreateContainer within sandbox \"acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3cd24432176c098cd4e62471eac3c4c6e2956ad7152f32044bb4c5e5c7e5b936\"" Dec 13 01:28:13.510468 containerd[1465]: time="2024-12-13T01:28:13.509141609Z" level=info msg="StartContainer for \"3cd24432176c098cd4e62471eac3c4c6e2956ad7152f32044bb4c5e5c7e5b936\"" Dec 13 01:28:13.564577 systemd[1]: Started 
cri-containerd-3cd24432176c098cd4e62471eac3c4c6e2956ad7152f32044bb4c5e5c7e5b936.scope - libcontainer container 3cd24432176c098cd4e62471eac3c4c6e2956ad7152f32044bb4c5e5c7e5b936. Dec 13 01:28:13.623134 containerd[1465]: time="2024-12-13T01:28:13.622949681Z" level=info msg="StartContainer for \"3cd24432176c098cd4e62471eac3c4c6e2956ad7152f32044bb4c5e5c7e5b936\" returns successfully" Dec 13 01:28:14.579789 kubelet[2536]: I1213 01:28:14.578066 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f9796d998-b2p2w" podStartSLOduration=29.190992381 podStartE2EDuration="34.578039743s" podCreationTimestamp="2024-12-13 01:27:40 +0000 UTC" firstStartedPulling="2024-12-13 01:28:08.079989682 +0000 UTC m=+40.998924681" lastFinishedPulling="2024-12-13 01:28:13.467037042 +0000 UTC m=+46.385972043" observedRunningTime="2024-12-13 01:28:14.577884691 +0000 UTC m=+47.496819702" watchObservedRunningTime="2024-12-13 01:28:14.578039743 +0000 UTC m=+47.496974751" Dec 13 01:28:14.865038 containerd[1465]: time="2024-12-13T01:28:14.864884207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:14.867078 containerd[1465]: time="2024-12-13T01:28:14.866830316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:28:14.868790 containerd[1465]: time="2024-12-13T01:28:14.868713056Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:14.873593 containerd[1465]: time="2024-12-13T01:28:14.873557811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:14.875493 containerd[1465]: time="2024-12-13T01:28:14.875448468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.407497607s" Dec 13 01:28:14.875648 containerd[1465]: time="2024-12-13T01:28:14.875496279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:28:14.879365 containerd[1465]: time="2024-12-13T01:28:14.879196502Z" level=info msg="CreateContainer within sandbox \"17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:28:14.898767 containerd[1465]: time="2024-12-13T01:28:14.898710651Z" level=info msg="CreateContainer within sandbox \"17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"aa68dce36f1aea9b344c5b1fe7868e256fc06a5507a8a27ba3e3577f62c9a274\"" Dec 13 01:28:14.901966 containerd[1465]: time="2024-12-13T01:28:14.900742692Z" level=info msg="StartContainer for 
\"aa68dce36f1aea9b344c5b1fe7868e256fc06a5507a8a27ba3e3577f62c9a274\"" Dec 13 01:28:14.954572 systemd[1]: Started cri-containerd-aa68dce36f1aea9b344c5b1fe7868e256fc06a5507a8a27ba3e3577f62c9a274.scope - libcontainer container aa68dce36f1aea9b344c5b1fe7868e256fc06a5507a8a27ba3e3577f62c9a274. Dec 13 01:28:14.997116 containerd[1465]: time="2024-12-13T01:28:14.997047941Z" level=info msg="StartContainer for \"aa68dce36f1aea9b344c5b1fe7868e256fc06a5507a8a27ba3e3577f62c9a274\" returns successfully" Dec 13 01:28:15.363399 kubelet[2536]: I1213 01:28:15.363358 2536 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:28:15.363399 kubelet[2536]: I1213 01:28:15.363416 2536 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:28:15.556628 kubelet[2536]: I1213 01:28:15.555948 2536 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:17.098888 kubelet[2536]: I1213 01:28:17.098282 2536 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:17.195112 kubelet[2536]: I1213 01:28:17.194427 2536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bb5cr" podStartSLOduration=29.356131898 podStartE2EDuration="37.194400801s" podCreationTimestamp="2024-12-13 01:27:40 +0000 UTC" firstStartedPulling="2024-12-13 01:28:07.038785504 +0000 UTC m=+39.957720499" lastFinishedPulling="2024-12-13 01:28:14.877054405 +0000 UTC m=+47.795989402" observedRunningTime="2024-12-13 01:28:15.575753827 +0000 UTC m=+48.494688837" watchObservedRunningTime="2024-12-13 01:28:17.194400801 +0000 UTC m=+50.113335810" Dec 13 01:28:22.448159 kubelet[2536]: I1213 01:28:22.447955 2536 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:23.020795 systemd[1]: Started sshd@7-10.128.0.34:22-147.75.109.163:59966.service - OpenSSH per-connection server daemon (147.75.109.163:59966). Dec 13 01:28:23.334717 sshd[4881]: Accepted publickey for core from 147.75.109.163 port 59966 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:28:23.334817 sshd[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:23.342603 systemd-logind[1452]: New session 8 of user core. Dec 13 01:28:23.355579 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:28:23.665680 sshd[4881]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:23.674193 systemd[1]: sshd@7-10.128.0.34:22-147.75.109.163:59966.service: Deactivated successfully. Dec 13 01:28:23.675722 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:28:23.681635 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:28:23.687807 systemd-logind[1452]: Removed session 8. Dec 13 01:28:27.234756 containerd[1465]: time="2024-12-13T01:28:27.234653181Z" level=info msg="StopPodSandbox for \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\"" Dec 13 01:28:27.331302 containerd[1465]: 2024-12-13 01:28:27.287 [WARNING][4908] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cee062b2-3528-4440-b732-1044f2d79299", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784", Pod:"coredns-6f6b679f8f-hzxx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f91207b77e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:27.331302 containerd[1465]: 2024-12-13 01:28:27.287 [INFO][4908] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:28:27.331302 containerd[1465]: 2024-12-13 01:28:27.287 [INFO][4908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" iface="eth0" netns="" Dec 13 01:28:27.331302 containerd[1465]: 2024-12-13 01:28:27.287 [INFO][4908] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:28:27.331302 containerd[1465]: 2024-12-13 01:28:27.288 [INFO][4908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:28:27.331302 containerd[1465]: 2024-12-13 01:28:27.315 [INFO][4914] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" HandleID="k8s-pod-network.c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:27.331302 containerd[1465]: 2024-12-13 01:28:27.315 [INFO][4914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:28:27.331302 containerd[1465]: 2024-12-13 01:28:27.315 [INFO][4914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:27.331302 containerd[1465]: 2024-12-13 01:28:27.326 [WARNING][4914] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" HandleID="k8s-pod-network.c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:27.331302 containerd[1465]: 2024-12-13 01:28:27.326 [INFO][4914] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" HandleID="k8s-pod-network.c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:27.331302 containerd[1465]: 2024-12-13 01:28:27.328 [INFO][4914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:27.331302 containerd[1465]: 2024-12-13 01:28:27.329 [INFO][4908] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:28:27.331302 containerd[1465]: time="2024-12-13T01:28:27.331151541Z" level=info msg="TearDown network for sandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\" successfully" Dec 13 01:28:27.331302 containerd[1465]: time="2024-12-13T01:28:27.331180413Z" level=info msg="StopPodSandbox for \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\" returns successfully" Dec 13 01:28:27.332721 containerd[1465]: time="2024-12-13T01:28:27.332678986Z" level=info msg="RemovePodSandbox for \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\"" Dec 13 01:28:27.332721 containerd[1465]: time="2024-12-13T01:28:27.332726791Z" level=info msg="Forcibly stopping sandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\"" Dec 13 01:28:27.430171 containerd[1465]: 2024-12-13 01:28:27.387 [WARNING][4932] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cee062b2-3528-4440-b732-1044f2d79299", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"49d3174ebb295b119b768e2909a5ca6f954156c30f0be4838fa4cc9c25267784", Pod:"coredns-6f6b679f8f-hzxx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f91207b77e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:27.430171 containerd[1465]: 2024-12-13 01:28:27.389 [INFO][4932] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:28:27.430171 containerd[1465]: 2024-12-13 01:28:27.389 [INFO][4932] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" iface="eth0" netns="" Dec 13 01:28:27.430171 containerd[1465]: 2024-12-13 01:28:27.389 [INFO][4932] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:28:27.430171 containerd[1465]: 2024-12-13 01:28:27.389 [INFO][4932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:28:27.430171 containerd[1465]: 2024-12-13 01:28:27.415 [INFO][4938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" HandleID="k8s-pod-network.c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:27.430171 containerd[1465]: 2024-12-13 01:28:27.415 [INFO][4938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:28:27.430171 containerd[1465]: 2024-12-13 01:28:27.415 [INFO][4938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:27.430171 containerd[1465]: 2024-12-13 01:28:27.424 [WARNING][4938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" HandleID="k8s-pod-network.c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:27.430171 containerd[1465]: 2024-12-13 01:28:27.424 [INFO][4938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" HandleID="k8s-pod-network.c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--hzxx4-eth0" Dec 13 01:28:27.430171 containerd[1465]: 2024-12-13 01:28:27.426 [INFO][4938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:27.430171 containerd[1465]: 2024-12-13 01:28:27.427 [INFO][4932] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238" Dec 13 01:28:27.430171 containerd[1465]: time="2024-12-13T01:28:27.429798674Z" level=info msg="TearDown network for sandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\" successfully" Dec 13 01:28:27.436665 containerd[1465]: time="2024-12-13T01:28:27.436105455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:27.436665 containerd[1465]: time="2024-12-13T01:28:27.436386295Z" level=info msg="RemovePodSandbox \"c4395a567a000d35f8a34fdfad2014c3a8cb5986effe956c8b05b61f73b86238\" returns successfully" Dec 13 01:28:27.438466 containerd[1465]: time="2024-12-13T01:28:27.438061022Z" level=info msg="StopPodSandbox for \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\"" Dec 13 01:28:27.538279 containerd[1465]: 2024-12-13 01:28:27.494 [WARNING][4956] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0", GenerateName:"calico-apiserver-6f8dcc75bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6b02353-2595-4ed5-9b02-d81f11de016f", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dcc75bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26", Pod:"calico-apiserver-6f8dcc75bf-2tgk7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad2ea773366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:27.538279 containerd[1465]: 2024-12-13 01:28:27.494 [INFO][4956] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:28:27.538279 containerd[1465]: 2024-12-13 01:28:27.494 [INFO][4956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" iface="eth0" netns="" Dec 13 01:28:27.538279 containerd[1465]: 2024-12-13 01:28:27.494 [INFO][4956] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:28:27.538279 containerd[1465]: 2024-12-13 01:28:27.494 [INFO][4956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:28:27.538279 containerd[1465]: 2024-12-13 01:28:27.519 [INFO][4962] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" HandleID="k8s-pod-network.7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:27.538279 containerd[1465]: 2024-12-13 01:28:27.519 [INFO][4962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:27.538279 containerd[1465]: 2024-12-13 01:28:27.519 [INFO][4962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:27.538279 containerd[1465]: 2024-12-13 01:28:27.528 [WARNING][4962] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" HandleID="k8s-pod-network.7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:27.538279 containerd[1465]: 2024-12-13 01:28:27.529 [INFO][4962] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" HandleID="k8s-pod-network.7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:27.538279 containerd[1465]: 2024-12-13 01:28:27.531 [INFO][4962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:27.538279 containerd[1465]: 2024-12-13 01:28:27.533 [INFO][4956] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:28:27.538279 containerd[1465]: time="2024-12-13T01:28:27.536561381Z" level=info msg="TearDown network for sandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\" successfully" Dec 13 01:28:27.541283 containerd[1465]: time="2024-12-13T01:28:27.536605455Z" level=info msg="StopPodSandbox for \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\" returns successfully" Dec 13 01:28:27.541283 containerd[1465]: time="2024-12-13T01:28:27.540678961Z" level=info msg="RemovePodSandbox for \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\"" Dec 13 01:28:27.541283 containerd[1465]: time="2024-12-13T01:28:27.540744278Z" level=info msg="Forcibly stopping sandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\"" Dec 13 01:28:27.657068 containerd[1465]: 2024-12-13 01:28:27.593 [WARNING][4980] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0", GenerateName:"calico-apiserver-6f8dcc75bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6b02353-2595-4ed5-9b02-d81f11de016f", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dcc75bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"a9adcf4f48b06e5eb38106b8a12e8d7f2f982ec80327c894015be61c0a499f26", Pod:"calico-apiserver-6f8dcc75bf-2tgk7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad2ea773366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:27.657068 containerd[1465]: 2024-12-13 01:28:27.594 [INFO][4980] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:28:27.657068 containerd[1465]: 2024-12-13 01:28:27.594 [INFO][4980] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" iface="eth0" netns="" Dec 13 01:28:27.657068 containerd[1465]: 2024-12-13 01:28:27.594 [INFO][4980] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:28:27.657068 containerd[1465]: 2024-12-13 01:28:27.594 [INFO][4980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:28:27.657068 containerd[1465]: 2024-12-13 01:28:27.631 [INFO][4986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" HandleID="k8s-pod-network.7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:27.657068 containerd[1465]: 2024-12-13 01:28:27.631 [INFO][4986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:27.657068 containerd[1465]: 2024-12-13 01:28:27.631 [INFO][4986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:27.657068 containerd[1465]: 2024-12-13 01:28:27.647 [WARNING][4986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" HandleID="k8s-pod-network.7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:27.657068 containerd[1465]: 2024-12-13 01:28:27.647 [INFO][4986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" HandleID="k8s-pod-network.7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--2tgk7-eth0" Dec 13 01:28:27.657068 containerd[1465]: 2024-12-13 01:28:27.651 [INFO][4986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:27.657068 containerd[1465]: 2024-12-13 01:28:27.654 [INFO][4980] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d" Dec 13 01:28:27.658206 containerd[1465]: time="2024-12-13T01:28:27.657121204Z" level=info msg="TearDown network for sandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\" successfully" Dec 13 01:28:27.665409 containerd[1465]: time="2024-12-13T01:28:27.664290459Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:27.665409 containerd[1465]: time="2024-12-13T01:28:27.664396954Z" level=info msg="RemovePodSandbox \"7d6cb4d957053c28aeba10a5514b4fc0e42c2cd6f884f2713c177a160ad1f25d\" returns successfully" Dec 13 01:28:27.666376 containerd[1465]: time="2024-12-13T01:28:27.665947866Z" level=info msg="StopPodSandbox for \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\"" Dec 13 01:28:27.783457 containerd[1465]: 2024-12-13 01:28:27.730 [WARNING][5005] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0", GenerateName:"calico-apiserver-6f8dcc75bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca0e53dc-8a33-4adf-906d-b0bf232eb9c0", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dcc75bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57", Pod:"calico-apiserver-6f8dcc75bf-xj2tb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c61d2a1d5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:27.783457 containerd[1465]: 2024-12-13 01:28:27.733 [INFO][5005] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:28:27.783457 containerd[1465]: 2024-12-13 01:28:27.733 [INFO][5005] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" iface="eth0" netns="" Dec 13 01:28:27.783457 containerd[1465]: 2024-12-13 01:28:27.735 [INFO][5005] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:28:27.783457 containerd[1465]: 2024-12-13 01:28:27.735 [INFO][5005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:28:27.783457 containerd[1465]: 2024-12-13 01:28:27.764 [INFO][5011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" HandleID="k8s-pod-network.83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:27.783457 containerd[1465]: 2024-12-13 01:28:27.765 [INFO][5011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:27.783457 containerd[1465]: 2024-12-13 01:28:27.765 [INFO][5011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:27.783457 containerd[1465]: 2024-12-13 01:28:27.776 [WARNING][5011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" HandleID="k8s-pod-network.83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:27.783457 containerd[1465]: 2024-12-13 01:28:27.776 [INFO][5011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" HandleID="k8s-pod-network.83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:27.783457 containerd[1465]: 2024-12-13 01:28:27.779 [INFO][5011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:27.783457 containerd[1465]: 2024-12-13 01:28:27.781 [INFO][5005] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:28:27.784274 containerd[1465]: time="2024-12-13T01:28:27.783520998Z" level=info msg="TearDown network for sandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\" successfully" Dec 13 01:28:27.784274 containerd[1465]: time="2024-12-13T01:28:27.783554173Z" level=info msg="StopPodSandbox for \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\" returns successfully" Dec 13 01:28:27.784274 containerd[1465]: time="2024-12-13T01:28:27.784173875Z" level=info msg="RemovePodSandbox for \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\"" Dec 13 01:28:27.784274 containerd[1465]: time="2024-12-13T01:28:27.784213478Z" level=info msg="Forcibly stopping sandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\"" Dec 13 01:28:27.884704 containerd[1465]: 2024-12-13 01:28:27.834 [WARNING][5029] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0", GenerateName:"calico-apiserver-6f8dcc75bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca0e53dc-8a33-4adf-906d-b0bf232eb9c0", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dcc75bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"5b2f0f21e498dc6843c60bb053e95b44e73f90245e80bf8c1fc3b53ad7797a57", Pod:"calico-apiserver-6f8dcc75bf-xj2tb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c61d2a1d5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:27.884704 containerd[1465]: 2024-12-13 01:28:27.834 [INFO][5029] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:28:27.884704 containerd[1465]: 2024-12-13 01:28:27.835 [INFO][5029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" iface="eth0" netns="" Dec 13 01:28:27.884704 containerd[1465]: 2024-12-13 01:28:27.835 [INFO][5029] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:28:27.884704 containerd[1465]: 2024-12-13 01:28:27.835 [INFO][5029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:28:27.884704 containerd[1465]: 2024-12-13 01:28:27.869 [INFO][5035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" HandleID="k8s-pod-network.83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:27.884704 containerd[1465]: 2024-12-13 01:28:27.869 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:27.884704 containerd[1465]: 2024-12-13 01:28:27.869 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:27.884704 containerd[1465]: 2024-12-13 01:28:27.878 [WARNING][5035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" HandleID="k8s-pod-network.83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:27.884704 containerd[1465]: 2024-12-13 01:28:27.878 [INFO][5035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" HandleID="k8s-pod-network.83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--apiserver--6f8dcc75bf--xj2tb-eth0" Dec 13 01:28:27.884704 containerd[1465]: 2024-12-13 01:28:27.880 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:27.884704 containerd[1465]: 2024-12-13 01:28:27.881 [INFO][5029] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071" Dec 13 01:28:27.884704 containerd[1465]: time="2024-12-13T01:28:27.884457113Z" level=info msg="TearDown network for sandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\" successfully" Dec 13 01:28:27.892585 containerd[1465]: time="2024-12-13T01:28:27.892507506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:27.892739 containerd[1465]: time="2024-12-13T01:28:27.892602201Z" level=info msg="RemovePodSandbox \"83fa5ceeecb5d5a7a49103a6a06bedb72688c58e728a08ffa3f67b166c53f071\" returns successfully" Dec 13 01:28:27.893225 containerd[1465]: time="2024-12-13T01:28:27.893192340Z" level=info msg="StopPodSandbox for \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\"" Dec 13 01:28:28.038507 containerd[1465]: 2024-12-13 01:28:27.961 [WARNING][5053] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a545ca75-b4b0-41f8-ba2f-947389823539", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d", Pod:"csi-node-driver-bb5cr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e74b99ec47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:28.038507 containerd[1465]: 2024-12-13 01:28:27.963 [INFO][5053] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:28:28.038507 containerd[1465]: 2024-12-13 01:28:27.963 [INFO][5053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" iface="eth0" netns="" Dec 13 01:28:28.038507 containerd[1465]: 2024-12-13 01:28:27.963 [INFO][5053] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:28:28.038507 containerd[1465]: 2024-12-13 01:28:27.963 [INFO][5053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:28:28.038507 containerd[1465]: 2024-12-13 01:28:28.009 [INFO][5061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" HandleID="k8s-pod-network.c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:28.038507 containerd[1465]: 2024-12-13 01:28:28.009 [INFO][5061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:28.038507 containerd[1465]: 2024-12-13 01:28:28.010 [INFO][5061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:28.038507 containerd[1465]: 2024-12-13 01:28:28.025 [WARNING][5061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" HandleID="k8s-pod-network.c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:28.038507 containerd[1465]: 2024-12-13 01:28:28.025 [INFO][5061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" HandleID="k8s-pod-network.c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:28.038507 containerd[1465]: 2024-12-13 01:28:28.031 [INFO][5061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:28.038507 containerd[1465]: 2024-12-13 01:28:28.035 [INFO][5053] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:28:28.038507 containerd[1465]: time="2024-12-13T01:28:28.038311755Z" level=info msg="TearDown network for sandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\" successfully" Dec 13 01:28:28.038507 containerd[1465]: time="2024-12-13T01:28:28.038379157Z" level=info msg="StopPodSandbox for \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\" returns successfully" Dec 13 01:28:28.041495 containerd[1465]: time="2024-12-13T01:28:28.040309237Z" level=info msg="RemovePodSandbox for \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\"" Dec 13 01:28:28.041495 containerd[1465]: time="2024-12-13T01:28:28.040388078Z" level=info msg="Forcibly stopping sandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\"" Dec 13 01:28:28.144590 containerd[1465]: 2024-12-13 01:28:28.100 [WARNING][5079] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a545ca75-b4b0-41f8-ba2f-947389823539", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"17f33a92a1048112ee27bbd92c411a59438dd28f4ffca6f9be78beb667521e1d", Pod:"csi-node-driver-bb5cr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e74b99ec47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:28.144590 containerd[1465]: 2024-12-13 01:28:28.100 [INFO][5079] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:28:28.144590 containerd[1465]: 2024-12-13 01:28:28.100 [INFO][5079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" iface="eth0" netns="" Dec 13 01:28:28.144590 containerd[1465]: 2024-12-13 01:28:28.100 [INFO][5079] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:28:28.144590 containerd[1465]: 2024-12-13 01:28:28.100 [INFO][5079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:28:28.144590 containerd[1465]: 2024-12-13 01:28:28.131 [INFO][5085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" HandleID="k8s-pod-network.c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:28.144590 containerd[1465]: 2024-12-13 01:28:28.131 [INFO][5085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:28.144590 containerd[1465]: 2024-12-13 01:28:28.131 [INFO][5085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:28.144590 containerd[1465]: 2024-12-13 01:28:28.139 [WARNING][5085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" HandleID="k8s-pod-network.c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:28.144590 containerd[1465]: 2024-12-13 01:28:28.139 [INFO][5085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" HandleID="k8s-pod-network.c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-csi--node--driver--bb5cr-eth0" Dec 13 01:28:28.144590 containerd[1465]: 2024-12-13 01:28:28.140 [INFO][5085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:28.144590 containerd[1465]: 2024-12-13 01:28:28.142 [INFO][5079] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6" Dec 13 01:28:28.144590 containerd[1465]: time="2024-12-13T01:28:28.143430190Z" level=info msg="TearDown network for sandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\" successfully" Dec 13 01:28:28.148877 containerd[1465]: time="2024-12-13T01:28:28.148828292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:28.149075 containerd[1465]: time="2024-12-13T01:28:28.148917758Z" level=info msg="RemovePodSandbox \"c3058c26af150fd4914916e02a04bfd55591ed10f4eba4b041cd4997b47371e6\" returns successfully" Dec 13 01:28:28.150124 containerd[1465]: time="2024-12-13T01:28:28.149644139Z" level=info msg="StopPodSandbox for \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\"" Dec 13 01:28:28.251027 containerd[1465]: 2024-12-13 01:28:28.204 [WARNING][5104] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282", Pod:"coredns-6f6b679f8f-p55lg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali082953b4e79", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:28.251027 containerd[1465]: 2024-12-13 01:28:28.205 [INFO][5104] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:28:28.251027 containerd[1465]: 2024-12-13 01:28:28.205 [INFO][5104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" iface="eth0" netns="" Dec 13 01:28:28.251027 containerd[1465]: 2024-12-13 01:28:28.205 [INFO][5104] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:28:28.251027 containerd[1465]: 2024-12-13 01:28:28.205 [INFO][5104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:28:28.251027 containerd[1465]: 2024-12-13 01:28:28.235 [INFO][5110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" HandleID="k8s-pod-network.e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:28.251027 containerd[1465]: 2024-12-13 01:28:28.236 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:28:28.251027 containerd[1465]: 2024-12-13 01:28:28.236 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:28.251027 containerd[1465]: 2024-12-13 01:28:28.244 [WARNING][5110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" HandleID="k8s-pod-network.e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:28.251027 containerd[1465]: 2024-12-13 01:28:28.244 [INFO][5110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" HandleID="k8s-pod-network.e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:28.251027 containerd[1465]: 2024-12-13 01:28:28.247 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:28.251027 containerd[1465]: 2024-12-13 01:28:28.249 [INFO][5104] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:28:28.253068 containerd[1465]: time="2024-12-13T01:28:28.251082443Z" level=info msg="TearDown network for sandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\" successfully" Dec 13 01:28:28.253068 containerd[1465]: time="2024-12-13T01:28:28.251151487Z" level=info msg="StopPodSandbox for \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\" returns successfully" Dec 13 01:28:28.253068 containerd[1465]: time="2024-12-13T01:28:28.251895248Z" level=info msg="RemovePodSandbox for \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\"" Dec 13 01:28:28.253068 containerd[1465]: time="2024-12-13T01:28:28.251933272Z" level=info msg="Forcibly stopping sandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\"" Dec 13 01:28:28.344637 containerd[1465]: 2024-12-13 01:28:28.302 [WARNING][5128] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c2b53d76-5bdc-4901-a9e7-dfdb2cc2545b", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"9b22eed2608e0b2b421224bb646d58f4f9ad58f4e0daca2de97e37449dff2282", Pod:"coredns-6f6b679f8f-p55lg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali082953b4e79", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:28.344637 containerd[1465]: 2024-12-13 01:28:28.302 [INFO][5128] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:28:28.344637 containerd[1465]: 2024-12-13 01:28:28.302 [INFO][5128] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" iface="eth0" netns="" Dec 13 01:28:28.344637 containerd[1465]: 2024-12-13 01:28:28.302 [INFO][5128] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:28:28.344637 containerd[1465]: 2024-12-13 01:28:28.302 [INFO][5128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Dec 13 01:28:28.344637 containerd[1465]: 2024-12-13 01:28:28.328 [INFO][5135] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" HandleID="k8s-pod-network.e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0" Dec 13 01:28:28.344637 containerd[1465]: 2024-12-13 01:28:28.328 [INFO][5135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:28:28.344637 containerd[1465]: 2024-12-13 01:28:28.328 [INFO][5135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:28:28.344637 containerd[1465]: 2024-12-13 01:28:28.339 [WARNING][5135] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" HandleID="k8s-pod-network.e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0"
Dec 13 01:28:28.344637 containerd[1465]: 2024-12-13 01:28:28.340 [INFO][5135] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" HandleID="k8s-pod-network.e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--p55lg-eth0"
Dec 13 01:28:28.344637 containerd[1465]: 2024-12-13 01:28:28.341 [INFO][5135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:28:28.344637 containerd[1465]: 2024-12-13 01:28:28.343 [INFO][5128] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e"
Dec 13 01:28:28.345895 containerd[1465]: time="2024-12-13T01:28:28.344676601Z" level=info msg="TearDown network for sandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\" successfully"
Dec 13 01:28:28.349731 containerd[1465]: time="2024-12-13T01:28:28.349483357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:28:28.349731 containerd[1465]: time="2024-12-13T01:28:28.349595771Z" level=info msg="RemovePodSandbox \"e809ff33503f25393ad03c7a268291e2b837ed15e68a69d1f03f4c8290f1ea8e\" returns successfully"
Dec 13 01:28:28.350239 containerd[1465]: time="2024-12-13T01:28:28.350188395Z" level=info msg="StopPodSandbox for \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\""
Dec 13 01:28:28.454502 containerd[1465]: 2024-12-13 01:28:28.403 [WARNING][5153] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0", GenerateName:"calico-kube-controllers-5f9796d998-", Namespace:"calico-system", SelfLink:"", UID:"c78149af-1946-4fbc-9d93-badab5c4fb43", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9796d998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c", Pod:"calico-kube-controllers-5f9796d998-b2p2w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califaec76748f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:28:28.454502 containerd[1465]: 2024-12-13 01:28:28.403 [INFO][5153] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2"
Dec 13 01:28:28.454502 containerd[1465]: 2024-12-13 01:28:28.404 [INFO][5153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" iface="eth0" netns=""
Dec 13 01:28:28.454502 containerd[1465]: 2024-12-13 01:28:28.404 [INFO][5153] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2"
Dec 13 01:28:28.454502 containerd[1465]: 2024-12-13 01:28:28.404 [INFO][5153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2"
Dec 13 01:28:28.454502 containerd[1465]: 2024-12-13 01:28:28.437 [INFO][5159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" HandleID="k8s-pod-network.ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0"
Dec 13 01:28:28.454502 containerd[1465]: 2024-12-13 01:28:28.438 [INFO][5159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:28:28.454502 containerd[1465]: 2024-12-13 01:28:28.438 [INFO][5159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:28:28.454502 containerd[1465]: 2024-12-13 01:28:28.447 [WARNING][5159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" HandleID="k8s-pod-network.ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0"
Dec 13 01:28:28.454502 containerd[1465]: 2024-12-13 01:28:28.447 [INFO][5159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" HandleID="k8s-pod-network.ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0"
Dec 13 01:28:28.454502 containerd[1465]: 2024-12-13 01:28:28.450 [INFO][5159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:28:28.454502 containerd[1465]: 2024-12-13 01:28:28.452 [INFO][5153] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2"
Dec 13 01:28:28.457124 containerd[1465]: time="2024-12-13T01:28:28.454646295Z" level=info msg="TearDown network for sandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\" successfully"
Dec 13 01:28:28.457124 containerd[1465]: time="2024-12-13T01:28:28.454684677Z" level=info msg="StopPodSandbox for \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\" returns successfully"
Dec 13 01:28:28.457124 containerd[1465]: time="2024-12-13T01:28:28.455881976Z" level=info msg="RemovePodSandbox for \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\""
Dec 13 01:28:28.457124 containerd[1465]: time="2024-12-13T01:28:28.455922267Z" level=info msg="Forcibly stopping sandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\""
Dec 13 01:28:28.571978 containerd[1465]: 2024-12-13 01:28:28.518 [WARNING][5177] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0", GenerateName:"calico-kube-controllers-5f9796d998-", Namespace:"calico-system", SelfLink:"", UID:"c78149af-1946-4fbc-9d93-badab5c4fb43", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9796d998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-a4c395b8c3820a7aa3e5.c.flatcar-212911.internal", ContainerID:"acd6e2f038256af05e85b4914ab1d68bd59c743338d6d1e54644a5d07ee6438c", Pod:"calico-kube-controllers-5f9796d998-b2p2w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califaec76748f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:28:28.571978 containerd[1465]: 2024-12-13 01:28:28.518 [INFO][5177] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2"
Dec 13 01:28:28.571978 containerd[1465]: 2024-12-13 01:28:28.519 [INFO][5177] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" iface="eth0" netns=""
Dec 13 01:28:28.571978 containerd[1465]: 2024-12-13 01:28:28.519 [INFO][5177] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2"
Dec 13 01:28:28.571978 containerd[1465]: 2024-12-13 01:28:28.519 [INFO][5177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2"
Dec 13 01:28:28.571978 containerd[1465]: 2024-12-13 01:28:28.554 [INFO][5183] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" HandleID="k8s-pod-network.ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0"
Dec 13 01:28:28.571978 containerd[1465]: 2024-12-13 01:28:28.554 [INFO][5183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:28:28.571978 containerd[1465]: 2024-12-13 01:28:28.554 [INFO][5183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:28:28.571978 containerd[1465]: 2024-12-13 01:28:28.564 [WARNING][5183] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" HandleID="k8s-pod-network.ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0"
Dec 13 01:28:28.571978 containerd[1465]: 2024-12-13 01:28:28.564 [INFO][5183] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" HandleID="k8s-pod-network.ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2" Workload="ci--4081--2--1--a4c395b8c3820a7aa3e5.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f9796d998--b2p2w-eth0"
Dec 13 01:28:28.571978 containerd[1465]: 2024-12-13 01:28:28.569 [INFO][5183] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:28:28.571978 containerd[1465]: 2024-12-13 01:28:28.570 [INFO][5177] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2"
Dec 13 01:28:28.572820 containerd[1465]: time="2024-12-13T01:28:28.572022510Z" level=info msg="TearDown network for sandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\" successfully"
Dec 13 01:28:28.577795 containerd[1465]: time="2024-12-13T01:28:28.577273568Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:28:28.577795 containerd[1465]: time="2024-12-13T01:28:28.577520287Z" level=info msg="RemovePodSandbox \"ac355be0314bcc7212f910d3e42e112efee7cfada879b67a6e0b7e28d69055c2\" returns successfully"
Dec 13 01:28:28.725812 systemd[1]: Started sshd@8-10.128.0.34:22-147.75.109.163:39482.service - OpenSSH per-connection server daemon (147.75.109.163:39482).
Dec 13 01:28:29.022261 sshd[5190]: Accepted publickey for core from 147.75.109.163 port 39482 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:29.025248 sshd[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:29.032889 systemd-logind[1452]: New session 9 of user core.
Dec 13 01:28:29.040560 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:28:29.335580 sshd[5190]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:29.340215 systemd[1]: sshd@8-10.128.0.34:22-147.75.109.163:39482.service: Deactivated successfully.
Dec 13 01:28:29.343308 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:28:29.345612 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:28:29.347834 systemd-logind[1452]: Removed session 9.
Dec 13 01:28:34.393814 systemd[1]: Started sshd@9-10.128.0.34:22-147.75.109.163:39494.service - OpenSSH per-connection server daemon (147.75.109.163:39494).
Dec 13 01:28:34.687049 sshd[5204]: Accepted publickey for core from 147.75.109.163 port 39494 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:34.689131 sshd[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:34.695422 systemd-logind[1452]: New session 10 of user core.
Dec 13 01:28:34.705592 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:28:34.976653 sshd[5204]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:34.983016 systemd[1]: sshd@9-10.128.0.34:22-147.75.109.163:39494.service: Deactivated successfully.
Dec 13 01:28:34.985733 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:28:34.987133 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:28:34.988726 systemd-logind[1452]: Removed session 10.
Dec 13 01:28:35.036727 systemd[1]: Started sshd@10-10.128.0.34:22-147.75.109.163:39498.service - OpenSSH per-connection server daemon (147.75.109.163:39498).
Dec 13 01:28:35.329326 sshd[5220]: Accepted publickey for core from 147.75.109.163 port 39498 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:35.331376 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:35.339072 systemd-logind[1452]: New session 11 of user core.
Dec 13 01:28:35.343688 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:28:35.666258 sshd[5220]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:35.671886 systemd[1]: sshd@10-10.128.0.34:22-147.75.109.163:39498.service: Deactivated successfully.
Dec 13 01:28:35.675328 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:28:35.677430 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:28:35.679280 systemd-logind[1452]: Removed session 11.
Dec 13 01:28:35.723718 systemd[1]: Started sshd@11-10.128.0.34:22-147.75.109.163:39508.service - OpenSSH per-connection server daemon (147.75.109.163:39508).
Dec 13 01:28:36.027310 sshd[5231]: Accepted publickey for core from 147.75.109.163 port 39508 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:36.029324 sshd[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:36.034973 systemd-logind[1452]: New session 12 of user core.
Dec 13 01:28:36.045552 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:28:36.325200 sshd[5231]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:36.329981 systemd[1]: sshd@11-10.128.0.34:22-147.75.109.163:39508.service: Deactivated successfully.
Dec 13 01:28:36.332662 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:28:36.334769 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:28:36.336253 systemd-logind[1452]: Removed session 12.
Dec 13 01:28:41.382725 systemd[1]: Started sshd@12-10.128.0.34:22-147.75.109.163:52642.service - OpenSSH per-connection server daemon (147.75.109.163:52642).
Dec 13 01:28:41.684549 sshd[5246]: Accepted publickey for core from 147.75.109.163 port 52642 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:41.687963 sshd[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:41.696680 systemd-logind[1452]: New session 13 of user core.
Dec 13 01:28:41.701584 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:28:41.991078 sshd[5246]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:41.999577 systemd[1]: sshd@12-10.128.0.34:22-147.75.109.163:52642.service: Deactivated successfully.
Dec 13 01:28:42.002139 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:28:42.003514 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:28:42.005046 systemd-logind[1452]: Removed session 13.
Dec 13 01:28:47.047006 systemd[1]: Started sshd@13-10.128.0.34:22-147.75.109.163:59456.service - OpenSSH per-connection server daemon (147.75.109.163:59456).
Dec 13 01:28:47.131799 systemd[1]: run-containerd-runc-k8s.io-3cd24432176c098cd4e62471eac3c4c6e2956ad7152f32044bb4c5e5c7e5b936-runc.kSzBGq.mount: Deactivated successfully.
Dec 13 01:28:47.337016 sshd[5292]: Accepted publickey for core from 147.75.109.163 port 59456 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:47.340302 sshd[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:47.350161 systemd-logind[1452]: New session 14 of user core.
Dec 13 01:28:47.355003 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:28:47.644713 sshd[5292]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:47.654167 systemd[1]: sshd@13-10.128.0.34:22-147.75.109.163:59456.service: Deactivated successfully.
Dec 13 01:28:47.658562 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:28:47.660795 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:28:47.662374 systemd-logind[1452]: Removed session 14.
Dec 13 01:28:52.701768 systemd[1]: Started sshd@14-10.128.0.34:22-147.75.109.163:59464.service - OpenSSH per-connection server daemon (147.75.109.163:59464).
Dec 13 01:28:52.989723 sshd[5326]: Accepted publickey for core from 147.75.109.163 port 59464 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:52.991797 sshd[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:52.998198 systemd-logind[1452]: New session 15 of user core.
Dec 13 01:28:53.005605 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:28:53.290370 sshd[5326]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:53.295564 systemd[1]: sshd@14-10.128.0.34:22-147.75.109.163:59464.service: Deactivated successfully.
Dec 13 01:28:53.298676 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:28:53.301151 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:28:53.303571 systemd-logind[1452]: Removed session 15.
Dec 13 01:28:58.345731 systemd[1]: Started sshd@15-10.128.0.34:22-147.75.109.163:56468.service - OpenSSH per-connection server daemon (147.75.109.163:56468).
Dec 13 01:28:58.629290 sshd[5340]: Accepted publickey for core from 147.75.109.163 port 56468 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:58.631235 sshd[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:58.638198 systemd-logind[1452]: New session 16 of user core.
Dec 13 01:28:58.643638 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:28:58.923632 sshd[5340]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:58.929304 systemd[1]: sshd@15-10.128.0.34:22-147.75.109.163:56468.service: Deactivated successfully.
Dec 13 01:28:58.932248 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:28:58.933407 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:28:58.934996 systemd-logind[1452]: Removed session 16.
Dec 13 01:28:58.984741 systemd[1]: Started sshd@16-10.128.0.34:22-147.75.109.163:56470.service - OpenSSH per-connection server daemon (147.75.109.163:56470).
Dec 13 01:28:59.267857 sshd[5352]: Accepted publickey for core from 147.75.109.163 port 56470 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:59.269787 sshd[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:59.276765 systemd-logind[1452]: New session 17 of user core.
Dec 13 01:28:59.280543 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:28:59.648728 sshd[5352]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:59.653051 systemd[1]: sshd@16-10.128.0.34:22-147.75.109.163:56470.service: Deactivated successfully.
Dec 13 01:28:59.655993 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:28:59.658078 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:28:59.660116 systemd-logind[1452]: Removed session 17.
Dec 13 01:28:59.702754 systemd[1]: Started sshd@17-10.128.0.34:22-147.75.109.163:56480.service - OpenSSH per-connection server daemon (147.75.109.163:56480).
Dec 13 01:28:59.991378 sshd[5363]: Accepted publickey for core from 147.75.109.163 port 56480 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:59.993255 sshd[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:59.999808 systemd-logind[1452]: New session 18 of user core.
Dec 13 01:29:00.003554 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:29:02.200622 sshd[5363]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:02.207860 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:29:02.209117 systemd[1]: sshd@17-10.128.0.34:22-147.75.109.163:56480.service: Deactivated successfully.
Dec 13 01:29:02.212116 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:29:02.214541 systemd-logind[1452]: Removed session 18.
Dec 13 01:29:02.251687 systemd[1]: Started sshd@18-10.128.0.34:22-147.75.109.163:56496.service - OpenSSH per-connection server daemon (147.75.109.163:56496).
Dec 13 01:29:02.540735 sshd[5381]: Accepted publickey for core from 147.75.109.163 port 56496 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:29:02.543536 sshd[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:02.551637 systemd-logind[1452]: New session 19 of user core.
Dec 13 01:29:02.558585 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:29:02.963520 sshd[5381]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:02.968987 systemd[1]: sshd@18-10.128.0.34:22-147.75.109.163:56496.service: Deactivated successfully.
Dec 13 01:29:02.971775 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:29:02.972815 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:29:02.974536 systemd-logind[1452]: Removed session 19.
Dec 13 01:29:03.022878 systemd[1]: Started sshd@19-10.128.0.34:22-147.75.109.163:56502.service - OpenSSH per-connection server daemon (147.75.109.163:56502).
Dec 13 01:29:03.312573 sshd[5392]: Accepted publickey for core from 147.75.109.163 port 56502 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:29:03.314561 sshd[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:03.320112 systemd-logind[1452]: New session 20 of user core.
Dec 13 01:29:03.331578 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:29:03.603175 sshd[5392]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:03.609092 systemd[1]: sshd@19-10.128.0.34:22-147.75.109.163:56502.service: Deactivated successfully.
Dec 13 01:29:03.612182 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:29:03.613651 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:29:03.615550 systemd-logind[1452]: Removed session 20.
Dec 13 01:29:08.663755 systemd[1]: Started sshd@20-10.128.0.34:22-147.75.109.163:42546.service - OpenSSH per-connection server daemon (147.75.109.163:42546).
Dec 13 01:29:08.960860 sshd[5407]: Accepted publickey for core from 147.75.109.163 port 42546 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:29:08.962950 sshd[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:08.969573 systemd-logind[1452]: New session 21 of user core.
Dec 13 01:29:08.976575 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:29:09.254640 sshd[5407]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:09.263738 systemd[1]: sshd@20-10.128.0.34:22-147.75.109.163:42546.service: Deactivated successfully.
Dec 13 01:29:09.269638 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:29:09.276604 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:29:09.280838 systemd-logind[1452]: Removed session 21.
Dec 13 01:29:12.165437 systemd[1]: run-containerd-runc-k8s.io-65814a5692f03436ffee4bf7bd97ab335dad7299d294727a83d7704967ae88a0-runc.k8PAkP.mount: Deactivated successfully.
Dec 13 01:29:14.307836 systemd[1]: Started sshd@21-10.128.0.34:22-147.75.109.163:42562.service - OpenSSH per-connection server daemon (147.75.109.163:42562).
Dec 13 01:29:14.602767 sshd[5447]: Accepted publickey for core from 147.75.109.163 port 42562 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:29:14.604721 sshd[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:14.610488 systemd-logind[1452]: New session 22 of user core.
Dec 13 01:29:14.618614 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:29:14.894314 sshd[5447]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:14.900016 systemd[1]: sshd@21-10.128.0.34:22-147.75.109.163:42562.service: Deactivated successfully.
Dec 13 01:29:14.903331 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:29:14.904908 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:29:14.906668 systemd-logind[1452]: Removed session 22.
Dec 13 01:29:18.668336 systemd[1]: run-containerd-runc-k8s.io-3cd24432176c098cd4e62471eac3c4c6e2956ad7152f32044bb4c5e5c7e5b936-runc.8rTDJ3.mount: Deactivated successfully.
Dec 13 01:29:19.953932 systemd[1]: Started sshd@22-10.128.0.34:22-147.75.109.163:48496.service - OpenSSH per-connection server daemon (147.75.109.163:48496).
Dec 13 01:29:20.253518 sshd[5495]: Accepted publickey for core from 147.75.109.163 port 48496 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:29:20.255465 sshd[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:20.262234 systemd-logind[1452]: New session 23 of user core.
Dec 13 01:29:20.266747 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:29:20.542279 sshd[5495]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:20.549124 systemd[1]: sshd@22-10.128.0.34:22-147.75.109.163:48496.service: Deactivated successfully.
Dec 13 01:29:20.554027 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:29:20.555663 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:29:20.558814 systemd-logind[1452]: Removed session 23.
Dec 13 01:29:25.600389 systemd[1]: Started sshd@23-10.128.0.34:22-147.75.109.163:48504.service - OpenSSH per-connection server daemon (147.75.109.163:48504).
Dec 13 01:29:25.911504 sshd[5514]: Accepted publickey for core from 147.75.109.163 port 48504 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:29:25.914103 sshd[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:25.924251 systemd-logind[1452]: New session 24 of user core.
Dec 13 01:29:25.934690 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:29:26.238934 sshd[5514]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:26.251526 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:29:26.252465 systemd[1]: sshd@23-10.128.0.34:22-147.75.109.163:48504.service: Deactivated successfully.
Dec 13 01:29:26.256839 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:29:26.262103 systemd-logind[1452]: Removed session 24.