Jan 15 12:48:45.364819 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 15 12:48:45.364843 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 15 12:48:45.364852 kernel: KASLR enabled
Jan 15 12:48:45.364858 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 15 12:48:45.364865 kernel: printk: bootconsole [pl11] enabled
Jan 15 12:48:45.364871 kernel: efi: EFI v2.7 by EDK II
Jan 15 12:48:45.364878 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 15 12:48:45.364884 kernel: random: crng init done
Jan 15 12:48:45.364890 kernel: ACPI: Early table checksum verification disabled
Jan 15 12:48:45.364896 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 15 12:48:45.364903 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:45.364909 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:45.364917 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 15 12:48:45.364923 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:45.364931 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:45.364937 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:45.364944 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:45.364952 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:45.364958 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:45.364965 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 15 12:48:45.364971 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:45.364977 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 15 12:48:45.364984 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 15 12:48:45.364990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 15 12:48:45.364997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 15 12:48:45.365003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 15 12:48:45.365009 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 15 12:48:45.365016 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 15 12:48:45.365024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 15 12:48:45.365030 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 15 12:48:45.365037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 15 12:48:45.365043 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 15 12:48:45.365049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 15 12:48:45.365056 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 15 12:48:45.365063 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 15 12:48:45.365069 kernel: Zone ranges:
Jan 15 12:48:45.365076 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 15 12:48:45.365082 kernel: DMA32 empty
Jan 15 12:48:45.365088 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 12:48:45.365095 kernel: Movable zone start for each node
Jan 15 12:48:45.365106 kernel: Early memory node ranges
Jan 15 12:48:45.365113 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 15 12:48:45.365120 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 15 12:48:45.365127 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 15 12:48:45.365134 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 15 12:48:45.365143 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 15 12:48:45.365150 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 15 12:48:45.365169 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 12:48:45.365176 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 15 12:48:45.365183 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 15 12:48:45.365190 kernel: psci: probing for conduit method from ACPI.
Jan 15 12:48:45.365197 kernel: psci: PSCIv1.1 detected in firmware.
Jan 15 12:48:45.365204 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 15 12:48:45.365211 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 15 12:48:45.365218 kernel: psci: SMC Calling Convention v1.4
Jan 15 12:48:45.365225 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 15 12:48:45.365232 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 15 12:48:45.365240 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 15 12:48:45.365247 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 15 12:48:45.365254 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 15 12:48:45.365261 kernel: Detected PIPT I-cache on CPU0
Jan 15 12:48:45.365269 kernel: CPU features: detected: GIC system register CPU interface
Jan 15 12:48:45.365276 kernel: CPU features: detected: Hardware dirty bit management
Jan 15 12:48:45.365283 kernel: CPU features: detected: Spectre-BHB
Jan 15 12:48:45.365289 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 15 12:48:45.365297 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 15 12:48:45.365303 kernel: CPU features: detected: ARM erratum 1418040
Jan 15 12:48:45.365310 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 15 12:48:45.365319 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 15 12:48:45.365326 kernel: alternatives: applying boot alternatives
Jan 15 12:48:45.365334 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 15 12:48:45.365341 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 15 12:48:45.365348 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 15 12:48:45.365355 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 15 12:48:45.365361 kernel: Fallback order for Node 0: 0
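The kernel command line above is the contract between the bootloader and both the kernel and the initrd: it names the root filesystem (root=LABEL=ROOT), points mount.usr at a dm-verity device (/dev/mapper/usr, backed by the PARTUUID in verity.usr= with its root hash in verity.usrhash=), and carries the Flatcar/Ignition switches (flatcar.first_boot=detected, flatcar.oem.id=azure). A minimal Python sketch for inspecting these parameters on a live system; note that repeated keys such as console= simply overwrite earlier values in this naive dict:

```python
# Minimal sketch: parse the kernel command line this boot received.
# Flag-style tokens (e.g. "acpi=force") map key -> value; bare tokens map to None.
def parse_cmdline(path="/proc/cmdline"):
    with open(path) as f:
        tokens = f.read().split()
    params = {}
    for tok in tokens:
        key, sep, value = tok.partition("=")
        params[key] = value if sep else None  # repeated keys overwrite
    return params

if __name__ == "__main__":
    params = parse_cmdline()
    # e.g. confirm the dm-verity root hash the bootloader passed for /usr
    print(params.get("verity.usrhash"))
```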
Jan 15 12:48:45.365368 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 15 12:48:45.365375 kernel: Policy zone: Normal
Jan 15 12:48:45.365382 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 15 12:48:45.365389 kernel: software IO TLB: area num 2.
Jan 15 12:48:45.365398 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 15 12:48:45.365405 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved)
Jan 15 12:48:45.365412 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 15 12:48:45.365419 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 15 12:48:45.365426 kernel: rcu: RCU event tracing is enabled.
Jan 15 12:48:45.365433 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 15 12:48:45.365440 kernel: Trampoline variant of Tasks RCU enabled.
Jan 15 12:48:45.365447 kernel: Tracing variant of Tasks RCU enabled.
Jan 15 12:48:45.365454 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 15 12:48:45.365461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 15 12:48:45.365467 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 15 12:48:45.365476 kernel: GICv3: 960 SPIs implemented
Jan 15 12:48:45.365483 kernel: GICv3: 0 Extended SPIs implemented
Jan 15 12:48:45.365489 kernel: Root IRQ handler: gic_handle_irq
Jan 15 12:48:45.365496 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 15 12:48:45.365503 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 15 12:48:45.365510 kernel: ITS: No ITS available, not enabling LPIs
Jan 15 12:48:45.365517 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 15 12:48:45.365524 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 15 12:48:45.365557 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 15 12:48:45.365564 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 15 12:48:45.365571 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 15 12:48:45.365581 kernel: Console: colour dummy device 80x25
Jan 15 12:48:45.365588 kernel: printk: console [tty1] enabled
Jan 15 12:48:45.365595 kernel: ACPI: Core revision 20230628
Jan 15 12:48:45.365602 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 15 12:48:45.365609 kernel: pid_max: default: 32768 minimum: 301
Jan 15 12:48:45.365616 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 15 12:48:45.365623 kernel: landlock: Up and running.
Jan 15 12:48:45.365630 kernel: SELinux: Initializing.
Jan 15 12:48:45.365637 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 12:48:45.365644 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 12:48:45.365653 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 12:48:45.365660 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 12:48:45.365667 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 15 12:48:45.365674 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 15 12:48:45.365681 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 15 12:48:45.365688 kernel: rcu: Hierarchical SRCU implementation.
Jan 15 12:48:45.365695 kernel: rcu: Max phase no-delay instances is 400.
Jan 15 12:48:45.365709 kernel: Remapping and enabling EFI services.
Jan 15 12:48:45.365717 kernel: smp: Bringing up secondary CPUs ...
Jan 15 12:48:45.365724 kernel: Detected PIPT I-cache on CPU1
Jan 15 12:48:45.365731 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 15 12:48:45.365740 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 15 12:48:45.365748 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 15 12:48:45.365755 kernel: smp: Brought up 1 node, 2 CPUs
Jan 15 12:48:45.365763 kernel: SMP: Total of 2 processors activated.
Jan 15 12:48:45.365770 kernel: CPU features: detected: 32-bit EL0 Support
Jan 15 12:48:45.365779 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 15 12:48:45.365786 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 15 12:48:45.365794 kernel: CPU features: detected: CRC32 instructions
Jan 15 12:48:45.365801 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 15 12:48:45.365809 kernel: CPU features: detected: LSE atomic instructions
Jan 15 12:48:45.365816 kernel: CPU features: detected: Privileged Access Never
Jan 15 12:48:45.365823 kernel: CPU: All CPU(s) started at EL1
Jan 15 12:48:45.365831 kernel: alternatives: applying system-wide alternatives
Jan 15 12:48:45.365838 kernel: devtmpfs: initialized
Jan 15 12:48:45.365847 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 15 12:48:45.365854 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 15 12:48:45.365862 kernel: pinctrl core: initialized pinctrl subsystem
Jan 15 12:48:45.365869 kernel: SMBIOS 3.1.0 present.
Jan 15 12:48:45.365877 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 15 12:48:45.365884 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 15 12:48:45.365892 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 15 12:48:45.365899 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 15 12:48:45.365907 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 15 12:48:45.365916 kernel: audit: initializing netlink subsys (disabled)
Jan 15 12:48:45.365923 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 15 12:48:45.365930 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 15 12:48:45.365938 kernel: cpuidle: using governor menu
Jan 15 12:48:45.365945 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 15 12:48:45.365953 kernel: ASID allocator initialised with 32768 entries
Jan 15 12:48:45.365960 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 15 12:48:45.365968 kernel: Serial: AMBA PL011 UART driver
Jan 15 12:48:45.365975 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 15 12:48:45.365984 kernel: Modules: 0 pages in range for non-PLT usage
Jan 15 12:48:45.365992 kernel: Modules: 509040 pages in range for PLT usage
Jan 15 12:48:45.365999 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 15 12:48:45.366006 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 15 12:48:45.366014 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 15 12:48:45.366021 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 15 12:48:45.366029 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 15 12:48:45.366036 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 15 12:48:45.366043 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 15 12:48:45.366052 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 15 12:48:45.366059 kernel: ACPI: Added _OSI(Module Device)
Jan 15 12:48:45.366067 kernel: ACPI: Added _OSI(Processor Device)
Jan 15 12:48:45.366074 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 15 12:48:45.366082 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 15 12:48:45.366089 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 15 12:48:45.366096 kernel: ACPI: Interpreter enabled
Jan 15 12:48:45.366103 kernel: ACPI: Using GIC for interrupt routing
Jan 15 12:48:45.366111 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 15 12:48:45.366120 kernel: printk: console [ttyAMA0] enabled
Jan 15 12:48:45.366127 kernel: printk: bootconsole [pl11] disabled
Jan 15 12:48:45.366135 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 15 12:48:45.366142 kernel: iommu: Default domain type: Translated
Jan 15 12:48:45.366149 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 15 12:48:45.366157 kernel: efivars: Registered efivars operations
Jan 15 12:48:45.366164 kernel: vgaarb: loaded
Jan 15 12:48:45.366171 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 15 12:48:45.366178 kernel: VFS: Disk quotas dquot_6.6.0
Jan 15 12:48:45.366187 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 15 12:48:45.366195 kernel: pnp: PnP ACPI init
Jan 15 12:48:45.366202 kernel: pnp: PnP ACPI: found 0 devices
Jan 15 12:48:45.366209 kernel: NET: Registered PF_INET protocol family
Jan 15 12:48:45.366217 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 15 12:48:45.366224 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 15 12:48:45.366232 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 15 12:48:45.366239 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 15 12:48:45.366252 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 15 12:48:45.366261 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 15 12:48:45.366269 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 12:48:45.366276 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 12:48:45.366283 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 15 12:48:45.366291 kernel: PCI: CLS 0 bytes, default 64
Jan 15 12:48:45.366298 kernel: kvm [1]: HYP mode not available
Jan 15 12:48:45.366306 kernel: Initialise system trusted keyrings
Jan 15 12:48:45.366313 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 15 12:48:45.366321 kernel: Key type asymmetric registered
Jan 15 12:48:45.366329 kernel: Asymmetric key parser 'x509' registered
Jan 15 12:48:45.366337 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 15 12:48:45.366344 kernel: io scheduler mq-deadline registered
Jan 15 12:48:45.366351 kernel: io scheduler kyber registered
Jan 15 12:48:45.366359 kernel: io scheduler bfq registered
Jan 15 12:48:45.366366 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 15 12:48:45.366374 kernel: thunder_xcv, ver 1.0
Jan 15 12:48:45.366381 kernel: thunder_bgx, ver 1.0
Jan 15 12:48:45.366389 kernel: nicpf, ver 1.0
Jan 15 12:48:45.366396 kernel: nicvf, ver 1.0
Jan 15 12:48:45.366565 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 15 12:48:45.366647 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-15T12:48:44 UTC (1736945324)
Jan 15 12:48:45.366658 kernel: efifb: probing for efifb
Jan 15 12:48:45.366666 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 15 12:48:45.366673 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 15 12:48:45.366681 kernel: efifb: scrolling: redraw
Jan 15 12:48:45.366688 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 15 12:48:45.366698 kernel: Console: switching to colour frame buffer device 128x48
Jan 15 12:48:45.366706 kernel: fb0: EFI VGA frame buffer device
Jan 15 12:48:45.366713 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 15 12:48:45.366721 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 15 12:48:45.366728 kernel: No ACPI PMU IRQ for CPU0
Jan 15 12:48:45.366735 kernel: No ACPI PMU IRQ for CPU1
Jan 15 12:48:45.366743 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 15 12:48:45.366750 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 15 12:48:45.366758 kernel: watchdog: Hard watchdog permanently disabled
Jan 15 12:48:45.366767 kernel: NET: Registered PF_INET6 protocol family
Jan 15 12:48:45.366774 kernel: Segment Routing with IPv6
Jan 15 12:48:45.366782 kernel: In-situ OAM (IOAM) with IPv6
Jan 15 12:48:45.366789 kernel: NET: Registered PF_PACKET protocol family
Jan 15 12:48:45.366796 kernel: Key type dns_resolver registered
Jan 15 12:48:45.366804 kernel: registered taskstats version 1
Jan 15 12:48:45.366811 kernel: Loading compiled-in X.509 certificates
Jan 15 12:48:45.366818 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 15 12:48:45.366826 kernel: Key type .fscrypt registered
Jan 15 12:48:45.366848 kernel: Key type fscrypt-provisioning registered
Jan 15 12:48:45.366856 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 15 12:48:45.366863 kernel: ima: Allocated hash algorithm: sha1
Jan 15 12:48:45.366871 kernel: ima: No architecture policies found
Jan 15 12:48:45.366878 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 15 12:48:45.366886 kernel: clk: Disabling unused clocks
Jan 15 12:48:45.366893 kernel: Freeing unused kernel memory: 39360K
Jan 15 12:48:45.366901 kernel: Run /init as init process
Jan 15 12:48:45.366908 kernel: with arguments:
Jan 15 12:48:45.366918 kernel: /init
Jan 15 12:48:45.366925 kernel: with environment:
Jan 15 12:48:45.366932 kernel: HOME=/
Jan 15 12:48:45.366939 kernel: TERM=linux
Jan 15 12:48:45.366947 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 15 12:48:45.366956 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 15 12:48:45.366966 systemd[1]: Detected virtualization microsoft.
Jan 15 12:48:45.366974 systemd[1]: Detected architecture arm64.
Jan 15 12:48:45.366983 systemd[1]: Running in initrd.
Jan 15 12:48:45.366991 systemd[1]: No hostname configured, using default hostname.
Jan 15 12:48:45.366999 systemd[1]: Hostname set to <localhost>.
Jan 15 12:48:45.367007 systemd[1]: Initializing machine ID from random generator.
Jan 15 12:48:45.367015 systemd[1]: Queued start job for default target initrd.target.
Jan 15 12:48:45.367023 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 12:48:45.367031 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 12:48:45.367040 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 15 12:48:45.367049 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 15 12:48:45.367057 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 15 12:48:45.367066 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 15 12:48:45.367075 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 15 12:48:45.367083 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 15 12:48:45.367091 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 12:48:45.367099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 15 12:48:45.367109 systemd[1]: Reached target paths.target - Path Units.
Jan 15 12:48:45.367117 systemd[1]: Reached target slices.target - Slice Units.
Jan 15 12:48:45.367125 systemd[1]: Reached target swap.target - Swaps.
Jan 15 12:48:45.367133 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 12:48:45.367141 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 12:48:45.367149 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 12:48:45.367157 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 15 12:48:45.367165 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 15 12:48:45.367175 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 12:48:45.367183 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 15 12:48:45.367191 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 12:48:45.367199 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 12:48:45.367207 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 15 12:48:45.367215 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 15 12:48:45.367223 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 15 12:48:45.367231 systemd[1]: Starting systemd-fsck-usr.service...
Jan 15 12:48:45.367239 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 15 12:48:45.367249 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 15 12:48:45.367271 systemd-journald[217]: Collecting audit messages is disabled.
Jan 15 12:48:45.367290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:48:45.367299 systemd-journald[217]: Journal started
Jan 15 12:48:45.367320 systemd-journald[217]: Runtime Journal (/run/log/journal/78588ac9abaf43128b6eca2c016b2dbd) is 8.0M, max 78.5M, 70.5M free.
Jan 15 12:48:45.373982 systemd-modules-load[218]: Inserted module 'overlay'
Jan 15 12:48:45.405381 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 15 12:48:45.405439 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 15 12:48:45.394197 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 15 12:48:45.411879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 12:48:45.431119 kernel: Bridge firewalling registered
Jan 15 12:48:45.430271 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 15 12:48:45.440973 systemd[1]: Finished systemd-fsck-usr.service.
Jan 15 12:48:45.450345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 15 12:48:45.461169 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:48:45.486851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 12:48:45.501576 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 12:48:45.514704 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 15 12:48:45.532644 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 15 12:48:45.557364 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 12:48:45.564442 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 12:48:45.577634 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:48:45.592958 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 12:48:45.617765 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 15 12:48:45.626009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 12:48:45.645637 dracut-cmdline[250]: dracut-dracut-053
Jan 15 12:48:45.654750 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 15 12:48:45.647682 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 15 12:48:45.712599 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 12:48:45.714870 systemd-resolved[251]: Positive Trust Anchors:
Jan 15 12:48:45.714880 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 12:48:45.714912 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 12:48:45.717123 systemd-resolved[251]: Defaulting to hostname 'linux'.
Jan 15 12:48:45.722403 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 12:48:45.736942 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 12:48:45.868566 kernel: SCSI subsystem initialized
Jan 15 12:48:45.876544 kernel: Loading iSCSI transport class v2.0-870.
Jan 15 12:48:45.887549 kernel: iscsi: registered transport (tcp)
Jan 15 12:48:45.905418 kernel: iscsi: registered transport (qla4xxx)
Jan 15 12:48:45.905457 kernel: QLogic iSCSI HBA Driver
Jan 15 12:48:45.946165 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 15 12:48:45.963790 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 15 12:48:45.991814 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 15 12:48:45.991852 kernel: device-mapper: uevent: version 1.0.3
Jan 15 12:48:45.998263 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 15 12:48:46.047549 kernel: raid6: neonx8 gen() 15742 MB/s
Jan 15 12:48:46.067541 kernel: raid6: neonx4 gen() 15662 MB/s
Jan 15 12:48:46.087544 kernel: raid6: neonx2 gen() 13236 MB/s
Jan 15 12:48:46.108544 kernel: raid6: neonx1 gen() 10486 MB/s
Jan 15 12:48:46.128545 kernel: raid6: int64x8 gen() 6960 MB/s
Jan 15 12:48:46.148539 kernel: raid6: int64x4 gen() 7349 MB/s
Jan 15 12:48:46.169559 kernel: raid6: int64x2 gen() 6131 MB/s
Jan 15 12:48:46.193189 kernel: raid6: int64x1 gen() 5061 MB/s
Jan 15 12:48:46.193216 kernel: raid6: using algorithm neonx8 gen() 15742 MB/s
Jan 15 12:48:46.217281 kernel: raid6: .... xor() 11903 MB/s, rmw enabled
Jan 15 12:48:46.217294 kernel: raid6: using neon recovery algorithm
Jan 15 12:48:46.230017 kernel: xor: measuring software checksum speed
Jan 15 12:48:46.230035 kernel: 8regs : 19797 MB/sec
Jan 15 12:48:46.233729 kernel: 32regs : 19646 MB/sec
Jan 15 12:48:46.237296 kernel: arm64_neon : 26954 MB/sec
Jan 15 12:48:46.241664 kernel: xor: using function: arm64_neon (26954 MB/sec)
Jan 15 12:48:46.293689 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 15 12:48:46.303031 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 12:48:46.319707 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 12:48:46.350805 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Jan 15 12:48:46.356699 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 12:48:46.376663 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 15 12:48:46.410274 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Jan 15 12:48:46.440658 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 12:48:46.455940 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 12:48:46.493926 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 12:48:46.511693 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 15 12:48:46.547644 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 15 12:48:46.561771 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 12:48:46.577000 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 12:48:46.586839 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 15 12:48:46.622402 kernel: hv_vmbus: Vmbus version:5.3
Jan 15 12:48:46.617705 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 15 12:48:46.628766 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 15 12:48:46.628891 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:48:46.706433 kernel: hv_vmbus: registering driver hv_netvsc
Jan 15 12:48:46.706462 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 15 12:48:46.706472 kernel: hv_vmbus: registering driver hid_hyperv
Jan 15 12:48:46.706482 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 15 12:48:46.706501 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 15 12:48:46.706511 kernel: PTP clock support registered
Jan 15 12:48:46.706521 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 15 12:48:46.717780 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 15 12:48:46.644486 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 12:48:46.753788 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 15 12:48:46.753818 kernel: hv_utils: Registering HyperV Utility Driver
Jan 15 12:48:46.669311 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:48:46.778321 kernel: hv_vmbus: registering driver hv_utils
Jan 15 12:48:46.778348 kernel: hv_netvsc 002248bd-4fac-0022-48bd-4fac002248bd eth0: VF slot 1 added
Jan 15 12:48:46.778510 kernel: hv_vmbus: registering driver hv_storvsc
Jan 15 12:48:46.778542 kernel: hv_utils: Heartbeat IC version 3.0
Jan 15 12:48:46.669518 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:48:47.334854 kernel: hv_utils: Shutdown IC version 3.2
Jan 15 12:48:47.334878 kernel: hv_utils: TimeSync IC version 4.0
Jan 15 12:48:47.334889 kernel: scsi host0: storvsc_host_t
Jan 15 12:48:47.335097 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 15 12:48:47.335125 kernel: scsi host1: storvsc_host_t
Jan 15 12:48:47.335256 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 15 12:48:46.739652 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:48:46.778882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:48:47.298970 systemd-resolved[251]: Clock change detected. Flushing caches.
Jan 15 12:48:47.320515 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 12:48:47.341780 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:48:47.355732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:48:47.355789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:48:47.426673 kernel: hv_vmbus: registering driver hv_pci
Jan 15 12:48:47.426699 kernel: hv_pci d5ef9fb2-8203-45de-a683-61b20ea5d1b8: PCI VMBus probing: Using version 0x10004
Jan 15 12:48:47.557622 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 15 12:48:47.557820 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 15 12:48:47.557840 kernel: hv_pci d5ef9fb2-8203-45de-a683-61b20ea5d1b8: PCI host bridge to bus 8203:00
Jan 15 12:48:47.557934 kernel: pci_bus 8203:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 15 12:48:47.558042 kernel: pci_bus 8203:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 15 12:48:47.558128 kernel: pci 8203:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 15 12:48:47.558247 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 15 12:48:47.558357 kernel: pci 8203:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 15 12:48:47.558453 kernel: pci 8203:00:02.0: enabling Extended Tags
Jan 15 12:48:47.558539 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 15 12:48:47.558632 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 15 12:48:47.558748 kernel: pci 8203:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8203:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 15 12:48:47.558856 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 15 12:48:47.558967 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 15 12:48:47.559074 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 15 12:48:47.559177 kernel: pci_bus 8203:00: busn_res: [bus 00-ff] end is updated to 00
Jan 15 12:48:47.559267 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:48:47.559277 kernel: pci 8203:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 15 12:48:47.559364 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 15 12:48:47.363964 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:48:47.389483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:48:47.474267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:48:47.487958 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 12:48:47.582444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:48:47.624736 kernel: mlx5_core 8203:00:02.0: enabling device (0000 -> 0002)
Jan 15 12:48:47.843745 kernel: mlx5_core 8203:00:02.0: firmware version: 16.30.1284
Jan 15 12:48:47.843961 kernel: hv_netvsc 002248bd-4fac-0022-48bd-4fac002248bd eth0: VF registering: eth1
Jan 15 12:48:47.844487 kernel: mlx5_core 8203:00:02.0 eth1: joined to eth0
Jan 15 12:48:47.844608 kernel: mlx5_core 8203:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 15 12:48:47.852769 kernel: mlx5_core 8203:00:02.0 enP33283s1: renamed from eth1
Jan 15 12:48:48.098598 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 15 12:48:48.171768 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (500)
Jan 15 12:48:48.185799 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 15 12:48:48.239047 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (483)
Jan 15 12:48:48.252691 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 15 12:48:48.261033 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 15 12:48:48.287318 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 15 12:48:48.309961 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 15 12:48:48.338319 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:48:48.344822 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:48:49.354060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:48:49.354115 disk-uuid[608]: The operation has completed successfully.
Jan 15 12:48:49.422033 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 15 12:48:49.423917 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 15 12:48:49.453888 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 15 12:48:49.468016 sh[694]: Success
Jan 15 12:48:49.508761 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 15 12:48:49.694000 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 15 12:48:49.723851 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 15 12:48:49.735460 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 15 12:48:49.770826 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234
Jan 15 12:48:49.770880 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:48:49.777889 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 15 12:48:49.783199 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 15 12:48:49.787421 kernel: BTRFS info (device dm-0): using free space tree
Jan 15 12:48:50.050552 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
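verity-setup.service above opens the read-only /usr partition as /dev/mapper/usr, checked against the verity.usrhash root hash from the command line (the kernel reports picking the sha256-ce implementation). A small sketch for inspecting that mapping after boot, assuming the cryptsetup package's veritysetup binary is present; the mapping name "usr" comes from mount.usr=/dev/mapper/usr:

```python
# Sketch: show the dm-verity mapping the initrd set up for /usr.
# Assumes `veritysetup` (from cryptsetup) is installed and the mapping
# is named "usr", as on the kernel command line above.
import subprocess

status = subprocess.run(
    ["veritysetup", "status", "usr"],
    capture_output=True, text=True, check=False,
)
print(status.stdout or status.stderr)
```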
Jan 15 12:48:50.056118 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 15 12:48:50.076988 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 15 12:48:50.091307 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 15 12:48:50.124136 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:50.124157 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:48:50.124167 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:48:50.145796 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:48:50.153857 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 15 12:48:50.167792 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:50.174209 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 15 12:48:50.186981 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 15 12:48:50.240709 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 15 12:48:50.264938 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 15 12:48:50.288323 systemd-networkd[878]: lo: Link UP
Jan 15 12:48:50.288336 systemd-networkd[878]: lo: Gained carrier
Jan 15 12:48:50.289988 systemd-networkd[878]: Enumeration completed
Jan 15 12:48:50.290608 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:48:50.290611 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 15 12:48:50.292588 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 15 12:48:50.302912 systemd[1]: Reached target network.target - Network.
Jan 15 12:48:50.374735 kernel: mlx5_core 8203:00:02.0 enP33283s1: Link up
Jan 15 12:48:50.412734 kernel: hv_netvsc 002248bd-4fac-0022-48bd-4fac002248bd eth0: Data path switched to VF: enP33283s1
Jan 15 12:48:50.413511 systemd-networkd[878]: enP33283s1: Link UP
Jan 15 12:48:50.413623 systemd-networkd[878]: eth0: Link UP
Jan 15 12:48:50.413781 systemd-networkd[878]: eth0: Gained carrier
Jan 15 12:48:50.413789 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:48:50.438239 systemd-networkd[878]: enP33283s1: Gained carrier
Jan 15 12:48:50.452784 systemd-networkd[878]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 15 12:48:51.176810 ignition[820]: Ignition 2.19.0
Jan 15 12:48:51.176822 ignition[820]: Stage: fetch-offline
Jan 15 12:48:51.180504 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 12:48:51.176862 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:51.193963 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 15 12:48:51.176871 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:51.177000 ignition[820]: parsed url from cmdline: ""
Jan 15 12:48:51.177003 ignition[820]: no config URL provided
Jan 15 12:48:51.177007 ignition[820]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 12:48:51.177015 ignition[820]: no config at "/usr/lib/ignition/user.ign"
Jan 15 12:48:51.177020 ignition[820]: failed to fetch config: resource requires networking
Jan 15 12:48:51.177241 ignition[820]: Ignition finished successfully
Jan 15 12:48:51.225375 ignition[886]: Ignition 2.19.0
Jan 15 12:48:51.225381 ignition[886]: Stage: fetch
Jan 15 12:48:51.225552 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:51.225562 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:51.225664 ignition[886]: parsed url from cmdline: ""
Jan 15 12:48:51.225667 ignition[886]: no config URL provided
Jan 15 12:48:51.225672 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 12:48:51.225680 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Jan 15 12:48:51.225700 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 15 12:48:51.345967 ignition[886]: GET result: OK
Jan 15 12:48:51.346623 ignition[886]: config has been read from IMDS userdata
Jan 15 12:48:51.346663 ignition[886]: parsing config with SHA512: 85324a30c14f99b17a0d5a6404cea2fca433012ddf4458496437231b978ab890755feef76179084e81ebcf89a4ae724ce3088e9b28ba043daa502e4ee6e24f57
Jan 15 12:48:51.350928 unknown[886]: fetched base config from "system"
Jan 15 12:48:51.351294 ignition[886]: fetch: fetch complete
Jan 15 12:48:51.350936 unknown[886]: fetched base config from "system"
Jan 15 12:48:51.351298 ignition[886]: fetch: fetch passed
Jan 15 12:48:51.350941 unknown[886]: fetched user config from "azure"
Jan 15 12:48:51.351341 ignition[886]: Ignition finished successfully
Jan 15 12:48:51.353260 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 15 12:48:51.403557 ignition[892]: Ignition 2.19.0
Jan 15 12:48:51.373133 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 15 12:48:51.403564 ignition[892]: Stage: kargs
Jan 15 12:48:51.407675 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 15 12:48:51.403782 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:51.403792 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:51.434005 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 15 12:48:51.404753 ignition[892]: kargs: kargs passed
Jan 15 12:48:51.404812 ignition[892]: Ignition finished successfully
Jan 15 12:48:51.466801 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 15 12:48:51.459835 ignition[899]: Ignition 2.19.0
Jan 15 12:48:51.474320 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 15 12:48:51.459842 ignition[899]: Stage: disks
Jan 15 12:48:51.485761 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 15 12:48:51.460048 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:51.499454 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 12:48:51.460057 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:51.509441 systemd[1]: Reached target sysinit.target - System Initialization.
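The fetch stage above succeeds by pulling the user-provided config from the Azure Instance Metadata Service at the exact URL in the log. A rough Python equivalent, runnable only on an Azure VM: IMDS rejects requests that lack the Metadata: true header, and it serves user data base64-encoded, so the payload is decoded before use; whether the SHA512 digest Ignition logs is taken over exactly these decoded bytes is an assumption here.

```python
# Sketch: reproduce the IMDS request Ignition's fetch stage logged above.
# Only works from inside an Azure VM (169.254.169.254 is link-local).
import base64
import hashlib
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    payload = resp.read()

# IMDS serves user data base64-encoded; decode before parsing.
config = base64.b64decode(payload)
print(config.decode(errors="replace"))
# Assumption: this digest corresponds to the one Ignition logs while parsing.
print(hashlib.sha512(config).hexdigest())
```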
Jan 15 12:48:51.461799 ignition[899]: disks: disks passed
Jan 15 12:48:51.522636 systemd[1]: Reached target basic.target - Basic System.
Jan 15 12:48:51.461863 ignition[899]: Ignition finished successfully
Jan 15 12:48:51.550991 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 15 12:48:51.664089 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 15 12:48:51.675166 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 15 12:48:51.697960 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 15 12:48:51.758823 kernel: EXT4-fs (sda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 15 12:48:51.759455 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 15 12:48:51.765757 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 15 12:48:51.818805 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 12:48:51.831554 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 15 12:48:51.837316 systemd-networkd[878]: eth0: Gained IPv6LL
Jan 15 12:48:51.839923 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 15 12:48:51.905681 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (919)
Jan 15 12:48:51.905713 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:51.905757 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:48:51.905776 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:48:51.866490 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 15 12:48:51.927512 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:48:51.866524 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 12:48:51.884629 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 15 12:48:51.948040 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 15 12:48:51.955671 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 12:48:52.155808 systemd-networkd[878]: enP33283s1: Gained IPv6LL
Jan 15 12:48:52.386139 coreos-metadata[921]: Jan 15 12:48:52.386 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 15 12:48:52.395301 coreos-metadata[921]: Jan 15 12:48:52.392 INFO Fetch successful
Jan 15 12:48:52.395301 coreos-metadata[921]: Jan 15 12:48:52.392 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 15 12:48:52.413253 coreos-metadata[921]: Jan 15 12:48:52.413 INFO Fetch successful
Jan 15 12:48:52.427809 coreos-metadata[921]: Jan 15 12:48:52.427 INFO wrote hostname ci-4081.3.0-a-f89ceb891c to /sysroot/etc/hostname
Jan 15 12:48:52.438624 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
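flatcar-metadata-hostname above resolves the machine name through an older IMDS API version and writes it into the new root before the pivot. A rough Python rendering of the two steps the log shows, with the endpoint and target path copied verbatim from the log; run from the initrd's perspective, where the real root is still mounted at /sysroot:

```python
# Sketch of the hostname step logged above: fetch the instance name from
# IMDS and write it where coreos-metadata wrote it. Azure-only, like above.
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/name"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    name = resp.read().decode().strip()

# In the initrd the real root lives under /sysroot; use /etc/hostname later.
with open("/sysroot/etc/hostname", "w") as f:
    f.write(name + "\n")
```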
Jan 15 12:48:52.750887 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Jan 15 12:48:52.791197 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Jan 15 12:48:52.810587 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Jan 15 12:48:52.821579 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 15 12:48:53.750655 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 15 12:48:53.770059 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 15 12:48:53.778913 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 15 12:48:53.807516 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:53.807086 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 15 12:48:53.833811 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 15 12:48:53.847115 ignition[1037]: INFO : Ignition 2.19.0
Jan 15 12:48:53.847115 ignition[1037]: INFO : Stage: mount
Jan 15 12:48:53.847115 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:53.847115 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:53.847115 ignition[1037]: INFO : mount: mount passed
Jan 15 12:48:53.847115 ignition[1037]: INFO : Ignition finished successfully
Jan 15 12:48:53.848317 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 15 12:48:53.883949 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 15 12:48:53.907927 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 12:48:53.941068 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1048)
Jan 15 12:48:53.941115 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:53.948375 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:48:53.954021 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:48:53.961729 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:48:53.963646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 12:48:53.995864 ignition[1065]: INFO : Ignition 2.19.0
Jan 15 12:48:53.995864 ignition[1065]: INFO : Stage: files
Jan 15 12:48:54.005070 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:54.005070 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:54.005070 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Jan 15 12:48:54.025652 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 15 12:48:54.025652 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 15 12:48:54.075320 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 15 12:48:54.083977 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 15 12:48:54.083977 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 15 12:48:54.076283 unknown[1065]: wrote ssh authorized keys file for user: core
Jan 15 12:48:54.106735 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 15 12:48:54.106735 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 15 12:48:54.150439 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 15 12:48:54.253990 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 15 12:48:54.253990 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 15 12:48:54.722060 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 15 12:48:54.931188 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 15 12:48:54.931188 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 15 12:48:54.970979 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 12:48:54.984054 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 12:48:54.984054 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 15 12:48:54.984054 ignition[1065]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 15 12:48:54.984054 ignition[1065]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 15 12:48:54.984054 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 12:48:54.984054 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 12:48:54.984054 ignition[1065]: INFO : files: files passed
Jan 15 12:48:54.984054 ignition[1065]: INFO : Ignition finished successfully
Jan 15 12:48:54.984253 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 15 12:48:55.030007 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 15 12:48:55.049890 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 15 12:48:55.081661 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 15 12:48:55.081777 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 15 12:48:55.125050 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 12:48:55.135037 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 12:48:55.135037 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 12:48:55.126430 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 12:48:55.143924 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 15 12:48:55.189029 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 15 12:48:55.228245 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 15 12:48:55.228398 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
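The files-stage operations above (the ssh key for core, the helm tarball, the kubernetes sysext image, and the enabled prepare-helm.service unit) are driven by the Ignition config fetched earlier from IMDS user data. That payload is not in the log, so the following is a hypothetical reconstruction of its visible effects as an Ignition spec 3.x config, with the ssh key and unit body as placeholders:

```python
# Hypothetical reconstruction of the kind of Ignition config whose effects
# appear in the files stage above. Not the actual payload; key and unit
# contents are placeholders, only the URLs and paths come from the log.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA-placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"
                },
            }
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,  # matches "setting preset to enabled" above
                "contents": "[Unit]\nDescription=placeholder\n"
                            "[Install]\nWantedBy=multi-user.target\n",
            }
        ]
    },
}

print(json.dumps(config, indent=2))
```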
Jan 15 12:48:55.244169 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 12:48:55.259569 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 12:48:55.272649 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 12:48:55.294918 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 12:48:55.318523 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 12:48:55.341033 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 12:48:55.362428 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 12:48:55.370221 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 12:48:55.385895 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 12:48:55.399336 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 12:48:55.399507 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 12:48:55.419500 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 12:48:55.433830 systemd[1]: Stopped target basic.target - Basic System. Jan 15 12:48:55.445753 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 12:48:55.459039 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 12:48:55.473706 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 12:48:55.488759 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 12:48:55.502403 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 12:48:55.517639 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 12:48:55.532639 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 12:48:55.545479 systemd[1]: Stopped target swap.target - Swaps. Jan 15 12:48:55.556751 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 12:48:55.556925 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 12:48:55.574794 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 12:48:55.588311 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 12:48:55.603481 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 12:48:55.611116 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 12:48:55.619758 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 12:48:55.619934 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 12:48:55.641635 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 12:48:55.641827 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 12:48:55.655906 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 12:48:55.656057 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 12:48:55.668438 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 15 12:48:55.668582 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 15 12:48:55.706897 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
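The long cascade of "Stopped target ..." records after dracut-pre-pivot is not piecemeal cleanup: initrd-cleanup.service asks PID 1 to isolate the switch-root target, and isolation stops every unit that the target does not require. A sketch of the mechanism (the isolate call reflects the stock systemd unit; the second command is just one way to inspect the surviving set on a live system):

    # initrd-cleanup.service effectively executes:
    systemctl --no-block isolate initrd-switch-root.target
    # units absent from this dependency tree are the ones stopped above:
    systemctl list-dependencies initrd-switch-root.target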
Jan 15 12:48:55.746708 ignition[1118]: INFO : Ignition 2.19.0 Jan 15 12:48:55.746708 ignition[1118]: INFO : Stage: umount Jan 15 12:48:55.746708 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 12:48:55.746708 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:48:55.746708 ignition[1118]: INFO : umount: umount passed Jan 15 12:48:55.746708 ignition[1118]: INFO : Ignition finished successfully Jan 15 12:48:55.729259 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 12:48:55.729456 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 12:48:55.740935 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 12:48:55.752880 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 12:48:55.753049 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 12:48:55.765852 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 12:48:55.765961 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 12:48:55.797733 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 12:48:55.797835 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 12:48:55.810754 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 12:48:55.810872 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 12:48:55.826499 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 12:48:55.826552 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 12:48:55.840312 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 15 12:48:55.840360 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 15 12:48:55.859860 systemd[1]: Stopped target network.target - Network. Jan 15 12:48:55.881573 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 12:48:55.881658 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 12:48:55.903170 systemd[1]: Stopped target paths.target - Path Units. Jan 15 12:48:55.915344 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 12:48:55.918759 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 12:48:55.930091 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 12:48:55.942201 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 12:48:55.960588 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 12:48:55.960643 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 12:48:55.967434 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 12:48:55.967482 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 12:48:55.980846 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 12:48:55.980898 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 12:48:55.993088 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 12:48:55.993132 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 12:48:56.006291 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 12:48:56.019635 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 12:48:56.032315 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
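The umount stage above is the last of Ignition's per-boot stages, each of which ran as a dedicated initrd service, so the records that follow stop ignition-mount, ignition-disks, ignition-kargs, ignition-fetch, ignition-fetch-offline, and ignition-setup in roughly reverse start order. For orientation, the stage sequence in this Ignition generation is fetch-offline, fetch, kargs, disks, mount, files, umount; all of the stage output seen in this log can be pulled back out of the journal by syslog identifier:

    # Ignition stage order (one initrd service per stage):
    #   fetch-offline -> fetch -> kargs -> disks -> mount -> files -> umount
    journalctl -o short-precise -t ignition    # replay every stage's records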
Jan 15 12:48:56.033253 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 12:48:56.033352 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 12:48:56.049126 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 12:48:56.049229 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 12:48:56.304318 kernel: hv_netvsc 002248bd-4fac-0022-48bd-4fac002248bd eth0: Data path switched from VF: enP33283s1 Jan 15 12:48:56.055228 systemd-networkd[878]: eth0: DHCPv6 lease lost Jan 15 12:48:56.066289 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 12:48:56.066438 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 12:48:56.081102 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 12:48:56.081174 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 12:48:56.118002 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 12:48:56.127988 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 12:48:56.128073 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 12:48:56.141891 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 12:48:56.141951 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 12:48:56.154276 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 12:48:56.154322 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 12:48:56.168365 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 12:48:56.168435 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 12:48:56.182428 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 12:48:56.231185 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 12:48:56.231371 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 12:48:56.245147 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 12:48:56.245203 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 12:48:56.259004 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 12:48:56.259042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 12:48:56.274021 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 12:48:56.274078 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 12:48:56.304388 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 12:48:56.304457 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 12:48:56.316075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 12:48:56.316144 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:48:56.361013 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 12:48:56.375560 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 12:48:56.375640 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 12:48:56.396726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:48:56.396785 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
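The out-of-band hv_netvsc record in the middle of this teardown is Azure accelerated networking unwinding: eth0 is the synthetic Hyper-V NIC and enP33283s1 is the Mellanox SR-IOV VF paired with it (both carry MAC 00:22:48:bd:4f:ac, also visible in the device GUID in the message), and traffic falls back from the VF to the synthetic path before the initrd network goes away. On a running VM the pairing can be inspected with standard tools (illustrative commands, not from this log):

    ip -br link show             # synthetic eth0 and VF enP33283s1 share one MAC
    ethtool -i enP33283s1        # driver: mlx5_core, the SR-IOV fast path
    ethtool -S eth0 | grep vf_   # netvsc's counters for traffic taken by the VF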
Jan 15 12:48:56.411437 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 12:48:56.413631 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 12:48:56.455269 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 12:48:56.455396 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 12:48:56.651797 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 15 12:48:56.498740 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 12:48:56.498875 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 12:48:56.511665 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 12:48:56.525068 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 12:48:56.525148 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 12:48:56.557989 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 12:48:56.580050 systemd[1]: Switching root. Jan 15 12:48:56.693458 systemd-journald[217]: Journal stopped Jan 15 12:48:45.364819 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 15 12:48:45.364843 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025 Jan 15 12:48:45.364852 kernel: KASLR enabled Jan 15 12:48:45.364858 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 15 12:48:45.364865 kernel: printk: bootconsole [pl11] enabled Jan 15 12:48:45.364871 kernel: efi: EFI v2.7 by EDK II Jan 15 12:48:45.364878 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 15 12:48:45.364884 kernel: random: crng init done Jan 15 12:48:45.364890 kernel: ACPI: Early table checksum verification disabled Jan 15 12:48:45.364896 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 15 12:48:45.364903 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 15 12:48:45.364909 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 15 12:48:45.364917 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 15 12:48:45.364923 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 15 12:48:45.364931 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 15 12:48:45.364937 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 15 12:48:45.364944 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 15 12:48:45.364952 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 15 12:48:45.364958 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 15 12:48:45.364965 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 15 12:48:45.364971 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 15 12:48:45.364977 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 15 12:48:45.364984 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 15 12:48:45.364990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 
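"Switching root." is the initrd's final act: initrd-switch-root.service asks PID 1 to pivot into /sysroot, after which systemd serializes its own state, moves /sysroot onto /, and re-executes itself from the real root. The SIGTERM delivered to systemd-journald[217] stops the initrd journal; its contents in /run are picked up again once the main system's journald starts. A sketch of the mechanism:

    # initrd-switch-root.service runs, approximately:
    systemctl --no-block switch-root /sysroot
    # PID 1 re-executes from the new root; everything logged after this point
    # comes from the real root's journald rather than the initrd's.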
15 12:48:45.364997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 15 12:48:45.365003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 15 12:48:45.365009 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 15 12:48:45.365016 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 15 12:48:45.365024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 15 12:48:45.365030 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 15 12:48:45.365037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 15 12:48:45.365043 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 15 12:48:45.365049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 15 12:48:45.365056 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 15 12:48:45.365063 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 15 12:48:45.365069 kernel: Zone ranges: Jan 15 12:48:45.365076 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 15 12:48:45.365082 kernel: DMA32 empty Jan 15 12:48:45.365088 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 15 12:48:45.365095 kernel: Movable zone start for each node Jan 15 12:48:45.365106 kernel: Early memory node ranges Jan 15 12:48:45.365113 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 15 12:48:45.365120 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 15 12:48:45.365127 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 15 12:48:45.365134 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 15 12:48:45.365143 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 15 12:48:45.365150 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 15 12:48:45.365169 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 15 12:48:45.365176 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 15 12:48:45.365183 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 15 12:48:45.365190 kernel: psci: probing for conduit method from ACPI. Jan 15 12:48:45.365197 kernel: psci: PSCIv1.1 detected in firmware. Jan 15 12:48:45.365204 kernel: psci: Using standard PSCI v0.2 function IDs Jan 15 12:48:45.365211 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 15 12:48:45.365218 kernel: psci: SMC Calling Convention v1.4 Jan 15 12:48:45.365225 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 15 12:48:45.365232 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 15 12:48:45.365240 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 15 12:48:45.365247 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 15 12:48:45.365254 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 15 12:48:45.365261 kernel: Detected PIPT I-cache on CPU0 Jan 15 12:48:45.365269 kernel: CPU features: detected: GIC system register CPU interface Jan 15 12:48:45.365276 kernel: CPU features: detected: Hardware dirty bit management Jan 15 12:48:45.365283 kernel: CPU features: detected: Spectre-BHB Jan 15 12:48:45.365289 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 15 12:48:45.365297 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 15 12:48:45.365303 kernel: CPU features: detected: ARM erratum 1418040 Jan 15 12:48:45.365310 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 15 12:48:45.365319 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 15 12:48:45.365326 kernel: alternatives: applying boot alternatives Jan 15 12:48:45.365334 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 15 12:48:45.365341 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 15 12:48:45.365348 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 15 12:48:45.365355 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 15 12:48:45.365361 kernel: Fallback order for Node 0: 0 Jan 15 12:48:45.365368 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 15 12:48:45.365375 kernel: Policy zone: Normal Jan 15 12:48:45.365382 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 12:48:45.365389 kernel: software IO TLB: area num 2. Jan 15 12:48:45.365398 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 15 12:48:45.365405 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Jan 15 12:48:45.365412 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 15 12:48:45.365419 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 12:48:45.365426 kernel: rcu: RCU event tracing is enabled. Jan 15 12:48:45.365433 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 15 12:48:45.365440 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 12:48:45.365447 kernel: Tracing variant of Tasks RCU enabled. Jan 15 12:48:45.365454 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 15 12:48:45.365461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 15 12:48:45.365467 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 15 12:48:45.365476 kernel: GICv3: 960 SPIs implemented Jan 15 12:48:45.365483 kernel: GICv3: 0 Extended SPIs implemented Jan 15 12:48:45.365489 kernel: Root IRQ handler: gic_handle_irq Jan 15 12:48:45.365496 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 15 12:48:45.365503 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 15 12:48:45.365510 kernel: ITS: No ITS available, not enabling LPIs Jan 15 12:48:45.365517 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 15 12:48:45.365524 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 15 12:48:45.365557 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 15 12:48:45.365564 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 15 12:48:45.365571 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 15 12:48:45.365581 kernel: Console: colour dummy device 80x25 Jan 15 12:48:45.365588 kernel: printk: console [tty1] enabled Jan 15 12:48:45.365595 kernel: ACPI: Core revision 20230628 Jan 15 12:48:45.365602 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 15 12:48:45.365609 kernel: pid_max: default: 32768 minimum: 301 Jan 15 12:48:45.365616 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 15 12:48:45.365623 kernel: landlock: Up and running. Jan 15 12:48:45.365630 kernel: SELinux: Initializing. Jan 15 12:48:45.365637 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 12:48:45.365644 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 12:48:45.365653 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 15 12:48:45.365660 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 15 12:48:45.365667 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jan 15 12:48:45.365674 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Jan 15 12:48:45.365681 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 15 12:48:45.365688 kernel: rcu: Hierarchical SRCU implementation. Jan 15 12:48:45.365695 kernel: rcu: Max phase no-delay instances is 400. Jan 15 12:48:45.365709 kernel: Remapping and enabling EFI services. Jan 15 12:48:45.365717 kernel: smp: Bringing up secondary CPUs ... Jan 15 12:48:45.365724 kernel: Detected PIPT I-cache on CPU1 Jan 15 12:48:45.365731 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 15 12:48:45.365740 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 15 12:48:45.365748 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 15 12:48:45.365755 kernel: smp: Brought up 1 node, 2 CPUs Jan 15 12:48:45.365763 kernel: SMP: Total of 2 processors activated. 
Jan 15 12:48:45.365770 kernel: CPU features: detected: 32-bit EL0 Support Jan 15 12:48:45.365779 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 15 12:48:45.365786 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 15 12:48:45.365794 kernel: CPU features: detected: CRC32 instructions Jan 15 12:48:45.365801 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 15 12:48:45.365809 kernel: CPU features: detected: LSE atomic instructions Jan 15 12:48:45.365816 kernel: CPU features: detected: Privileged Access Never Jan 15 12:48:45.365823 kernel: CPU: All CPU(s) started at EL1 Jan 15 12:48:45.365831 kernel: alternatives: applying system-wide alternatives Jan 15 12:48:45.365838 kernel: devtmpfs: initialized Jan 15 12:48:45.365847 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 12:48:45.365854 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 15 12:48:45.365862 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 12:48:45.365869 kernel: SMBIOS 3.1.0 present. Jan 15 12:48:45.365877 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 15 12:48:45.365884 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 12:48:45.365892 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 15 12:48:45.365899 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 15 12:48:45.365907 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 15 12:48:45.365916 kernel: audit: initializing netlink subsys (disabled) Jan 15 12:48:45.365923 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 15 12:48:45.365930 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 12:48:45.365938 kernel: cpuidle: using governor menu Jan 15 12:48:45.365945 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 15 12:48:45.365953 kernel: ASID allocator initialised with 32768 entries Jan 15 12:48:45.365960 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 12:48:45.365968 kernel: Serial: AMBA PL011 UART driver Jan 15 12:48:45.365975 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 15 12:48:45.365984 kernel: Modules: 0 pages in range for non-PLT usage Jan 15 12:48:45.365992 kernel: Modules: 509040 pages in range for PLT usage Jan 15 12:48:45.365999 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 15 12:48:45.366006 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 15 12:48:45.366014 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 15 12:48:45.366021 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 15 12:48:45.366029 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 12:48:45.366036 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 12:48:45.366043 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 15 12:48:45.366052 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 15 12:48:45.366059 kernel: ACPI: Added _OSI(Module Device) Jan 15 12:48:45.366067 kernel: ACPI: Added _OSI(Processor Device) Jan 15 12:48:45.366074 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 15 12:48:45.366082 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 12:48:45.366089 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 12:48:45.366096 kernel: ACPI: Interpreter enabled Jan 15 12:48:45.366103 kernel: ACPI: Using GIC for interrupt routing Jan 15 12:48:45.366111 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 15 12:48:45.366120 kernel: printk: console [ttyAMA0] enabled Jan 15 12:48:45.366127 kernel: printk: bootconsole [pl11] disabled Jan 15 12:48:45.366135 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 15 12:48:45.366142 kernel: iommu: Default domain type: Translated Jan 15 12:48:45.366149 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 15 12:48:45.366157 kernel: efivars: Registered efivars operations Jan 15 12:48:45.366164 kernel: vgaarb: loaded Jan 15 12:48:45.366171 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 15 12:48:45.366178 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 12:48:45.366187 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 12:48:45.366195 kernel: pnp: PnP ACPI init Jan 15 12:48:45.366202 kernel: pnp: PnP ACPI: found 0 devices Jan 15 12:48:45.366209 kernel: NET: Registered PF_INET protocol family Jan 15 12:48:45.366217 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 15 12:48:45.366224 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 15 12:48:45.366232 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 12:48:45.366239 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 12:48:45.366252 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 15 12:48:45.366261 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 15 12:48:45.366269 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 12:48:45.366276 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 12:48:45.366283 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 12:48:45.366291 kernel: PCI: CLS 0 bytes, default 64 Jan 15 12:48:45.366298 kernel: kvm [1]: HYP mode not available Jan 15 12:48:45.366306 kernel: Initialise system trusted keyrings Jan 15 12:48:45.366313 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 15 12:48:45.366321 kernel: Key type asymmetric registered Jan 15 12:48:45.366329 kernel: Asymmetric key parser 'x509' registered Jan 15 12:48:45.366337 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 15 12:48:45.366344 kernel: io scheduler mq-deadline registered Jan 15 12:48:45.366351 kernel: io scheduler kyber registered Jan 15 12:48:45.366359 kernel: io scheduler bfq registered Jan 15 12:48:45.366366 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 12:48:45.366374 kernel: thunder_xcv, ver 1.0 Jan 15 12:48:45.366381 kernel: thunder_bgx, ver 1.0 Jan 15 12:48:45.366389 kernel: nicpf, ver 1.0 Jan 15 12:48:45.366396 kernel: nicvf, ver 1.0 Jan 15 12:48:45.366565 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 15 12:48:45.366647 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-15T12:48:44 UTC (1736945324) Jan 15 12:48:45.366658 kernel: efifb: probing for efifb Jan 15 12:48:45.366666 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 15 12:48:45.366673 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 15 12:48:45.366681 kernel: efifb: scrolling: redraw Jan 15 12:48:45.366688 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 15 12:48:45.366698 kernel: Console: switching to colour frame buffer device 128x48 Jan 15 12:48:45.366706 kernel: fb0: EFI VGA frame buffer device Jan 15 12:48:45.366713 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 15 12:48:45.366721 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 15 12:48:45.366728 kernel: No ACPI PMU IRQ for CPU0 Jan 15 12:48:45.366735 kernel: No ACPI PMU IRQ for CPU1 Jan 15 12:48:45.366743 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jan 15 12:48:45.366750 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 15 12:48:45.366758 kernel: watchdog: Hard watchdog permanently disabled Jan 15 12:48:45.366767 kernel: NET: Registered PF_INET6 protocol family Jan 15 12:48:45.366774 kernel: Segment Routing with IPv6 Jan 15 12:48:45.366782 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 12:48:45.366789 kernel: NET: Registered PF_PACKET protocol family Jan 15 12:48:45.366796 kernel: Key type dns_resolver registered Jan 15 12:48:45.366804 kernel: registered taskstats version 1 Jan 15 12:48:45.366811 kernel: Loading compiled-in X.509 certificates Jan 15 12:48:45.366818 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638' Jan 15 12:48:45.366826 kernel: Key type .fscrypt registered Jan 15 12:48:45.366848 kernel: Key type fscrypt-provisioning registered Jan 15 12:48:45.366856 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 15 12:48:45.366863 kernel: ima: Allocated hash algorithm: sha1 Jan 15 12:48:45.366871 kernel: ima: No architecture policies found Jan 15 12:48:45.366878 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 15 12:48:45.366886 kernel: clk: Disabling unused clocks Jan 15 12:48:45.366893 kernel: Freeing unused kernel memory: 39360K Jan 15 12:48:45.366901 kernel: Run /init as init process Jan 15 12:48:45.366908 kernel: with arguments: Jan 15 12:48:45.366918 kernel: /init Jan 15 12:48:45.366925 kernel: with environment: Jan 15 12:48:45.366932 kernel: HOME=/ Jan 15 12:48:45.366939 kernel: TERM=linux Jan 15 12:48:45.366947 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 15 12:48:45.366956 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 15 12:48:45.366966 systemd[1]: Detected virtualization microsoft. Jan 15 12:48:45.366974 systemd[1]: Detected architecture arm64. Jan 15 12:48:45.366983 systemd[1]: Running in initrd. Jan 15 12:48:45.366991 systemd[1]: No hostname configured, using default hostname. Jan 15 12:48:45.366999 systemd[1]: Hostname set to . Jan 15 12:48:45.367007 systemd[1]: Initializing machine ID from random generator. Jan 15 12:48:45.367015 systemd[1]: Queued start job for default target initrd.target. Jan 15 12:48:45.367023 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 12:48:45.367031 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 12:48:45.367040 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 12:48:45.367049 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 12:48:45.367057 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 12:48:45.367066 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 12:48:45.367075 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 15 12:48:45.367083 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 15 12:48:45.367091 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 12:48:45.367099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 12:48:45.367109 systemd[1]: Reached target paths.target - Path Units. Jan 15 12:48:45.367117 systemd[1]: Reached target slices.target - Slice Units. Jan 15 12:48:45.367125 systemd[1]: Reached target swap.target - Swaps. Jan 15 12:48:45.367133 systemd[1]: Reached target timers.target - Timer Units. Jan 15 12:48:45.367141 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 12:48:45.367149 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 12:48:45.367157 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 12:48:45.367165 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 15 12:48:45.367175 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 12:48:45.367183 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 12:48:45.367191 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 12:48:45.367199 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 12:48:45.367207 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 12:48:45.367215 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 12:48:45.367223 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 12:48:45.367231 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 12:48:45.367239 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 12:48:45.367249 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 12:48:45.367271 systemd-journald[217]: Collecting audit messages is disabled. Jan 15 12:48:45.367290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:48:45.367299 systemd-journald[217]: Journal started Jan 15 12:48:45.367320 systemd-journald[217]: Runtime Journal (/run/log/journal/78588ac9abaf43128b6eca2c016b2dbd) is 8.0M, max 78.5M, 70.5M free. Jan 15 12:48:45.373982 systemd-modules-load[218]: Inserted module 'overlay' Jan 15 12:48:45.405381 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 12:48:45.405439 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 12:48:45.394197 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 12:48:45.411879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 12:48:45.431119 kernel: Bridge firewalling registered Jan 15 12:48:45.430271 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 15 12:48:45.440973 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 12:48:45.450345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 12:48:45.461169 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:48:45.486851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 12:48:45.501576 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 12:48:45.514704 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 12:48:45.532644 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 12:48:45.557364 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 12:48:45.564442 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 12:48:45.577634 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:48:45.592958 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 12:48:45.617765 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 12:48:45.626009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 15 12:48:45.645637 dracut-cmdline[250]: dracut-dracut-053 Jan 15 12:48:45.654750 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 15 12:48:45.647682 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 12:48:45.712599 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 12:48:45.714870 systemd-resolved[251]: Positive Trust Anchors: Jan 15 12:48:45.714880 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 12:48:45.714912 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 12:48:45.717123 systemd-resolved[251]: Defaulting to hostname 'linux'. Jan 15 12:48:45.722403 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 12:48:45.736942 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 12:48:45.868566 kernel: SCSI subsystem initialized Jan 15 12:48:45.876544 kernel: Loading iSCSI transport class v2.0-870. Jan 15 12:48:45.887549 kernel: iscsi: registered transport (tcp) Jan 15 12:48:45.905418 kernel: iscsi: registered transport (qla4xxx) Jan 15 12:48:45.905457 kernel: QLogic iSCSI HBA Driver Jan 15 12:48:45.946165 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 12:48:45.963790 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 12:48:45.991814 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 15 12:48:45.991852 kernel: device-mapper: uevent: version 1.0.3 Jan 15 12:48:45.998263 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 15 12:48:46.047549 kernel: raid6: neonx8 gen() 15742 MB/s Jan 15 12:48:46.067541 kernel: raid6: neonx4 gen() 15662 MB/s Jan 15 12:48:46.087544 kernel: raid6: neonx2 gen() 13236 MB/s Jan 15 12:48:46.108544 kernel: raid6: neonx1 gen() 10486 MB/s Jan 15 12:48:46.128545 kernel: raid6: int64x8 gen() 6960 MB/s Jan 15 12:48:46.148539 kernel: raid6: int64x4 gen() 7349 MB/s Jan 15 12:48:46.169559 kernel: raid6: int64x2 gen() 6131 MB/s Jan 15 12:48:46.193189 kernel: raid6: int64x1 gen() 5061 MB/s Jan 15 12:48:46.193216 kernel: raid6: using algorithm neonx8 gen() 15742 MB/s Jan 15 12:48:46.217281 kernel: raid6: .... 
xor() 11903 MB/s, rmw enabled Jan 15 12:48:46.217294 kernel: raid6: using neon recovery algorithm Jan 15 12:48:46.230017 kernel: xor: measuring software checksum speed Jan 15 12:48:46.230035 kernel: 8regs : 19797 MB/sec Jan 15 12:48:46.233729 kernel: 32regs : 19646 MB/sec Jan 15 12:48:46.237296 kernel: arm64_neon : 26954 MB/sec Jan 15 12:48:46.241664 kernel: xor: using function: arm64_neon (26954 MB/sec) Jan 15 12:48:46.293689 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 12:48:46.303031 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 12:48:46.319707 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 12:48:46.350805 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jan 15 12:48:46.356699 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 12:48:46.376663 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 12:48:46.410274 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation Jan 15 12:48:46.440658 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 12:48:46.455940 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 12:48:46.493926 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 12:48:46.511693 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 12:48:46.547644 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 12:48:46.561771 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 12:48:46.577000 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 12:48:46.586839 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 12:48:46.622402 kernel: hv_vmbus: Vmbus version:5.3 Jan 15 12:48:46.617705 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 12:48:46.628766 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 12:48:46.628891 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:48:46.706433 kernel: hv_vmbus: registering driver hv_netvsc Jan 15 12:48:46.706462 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 15 12:48:46.706472 kernel: hv_vmbus: registering driver hid_hyperv Jan 15 12:48:46.706482 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 15 12:48:46.706501 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 15 12:48:46.706511 kernel: PTP clock support registered Jan 15 12:48:46.706521 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 15 12:48:46.717780 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 15 12:48:46.644486 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 12:48:46.753788 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 15 12:48:46.753818 kernel: hv_utils: Registering HyperV Utility Driver Jan 15 12:48:46.669311 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 15 12:48:46.778321 kernel: hv_vmbus: registering driver hv_utils Jan 15 12:48:46.778348 kernel: hv_netvsc 002248bd-4fac-0022-48bd-4fac002248bd eth0: VF slot 1 added Jan 15 12:48:46.778510 kernel: hv_vmbus: registering driver hv_storvsc Jan 15 12:48:46.778542 kernel: hv_utils: Heartbeat IC version 3.0 Jan 15 12:48:46.669518 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:48:47.334854 kernel: hv_utils: Shutdown IC version 3.2 Jan 15 12:48:47.334878 kernel: hv_utils: TimeSync IC version 4.0 Jan 15 12:48:47.334889 kernel: scsi host0: storvsc_host_t Jan 15 12:48:47.335097 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 15 12:48:47.335125 kernel: scsi host1: storvsc_host_t Jan 15 12:48:47.335256 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 15 12:48:46.739652 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:48:46.778882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:48:47.298970 systemd-resolved[251]: Clock change detected. Flushing caches. Jan 15 12:48:47.320515 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 12:48:47.341780 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:48:47.355732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:48:47.355789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:48:47.426673 kernel: hv_vmbus: registering driver hv_pci Jan 15 12:48:47.426699 kernel: hv_pci d5ef9fb2-8203-45de-a683-61b20ea5d1b8: PCI VMBus probing: Using version 0x10004 Jan 15 12:48:47.557622 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 15 12:48:47.557820 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 15 12:48:47.557840 kernel: hv_pci d5ef9fb2-8203-45de-a683-61b20ea5d1b8: PCI host bridge to bus 8203:00 Jan 15 12:48:47.557934 kernel: pci_bus 8203:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 15 12:48:47.558042 kernel: pci_bus 8203:00: No busn resource found for root bus, will use [bus 00-ff] Jan 15 12:48:47.558128 kernel: pci 8203:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 15 12:48:47.558247 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 15 12:48:47.558357 kernel: pci 8203:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 15 12:48:47.558453 kernel: pci 8203:00:02.0: enabling Extended Tags Jan 15 12:48:47.558539 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 15 12:48:47.558632 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 15 12:48:47.558748 kernel: pci 8203:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8203:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 15 12:48:47.558856 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 15 12:48:47.558967 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 15 12:48:47.559074 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 15 12:48:47.559177 kernel: pci_bus 8203:00: busn_res: [bus 00-ff] end is updated to 00 Jan 15 12:48:47.559267 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:48:47.559277 kernel: pci 8203:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 15 12:48:47.559364 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 15 12:48:47.363964 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 15 12:48:47.389483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:48:47.474267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:48:47.487958 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 12:48:47.582444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:48:47.624736 kernel: mlx5_core 8203:00:02.0: enabling device (0000 -> 0002) Jan 15 12:48:47.843745 kernel: mlx5_core 8203:00:02.0: firmware version: 16.30.1284 Jan 15 12:48:47.843961 kernel: hv_netvsc 002248bd-4fac-0022-48bd-4fac002248bd eth0: VF registering: eth1 Jan 15 12:48:47.844487 kernel: mlx5_core 8203:00:02.0 eth1: joined to eth0 Jan 15 12:48:47.844608 kernel: mlx5_core 8203:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 15 12:48:47.852769 kernel: mlx5_core 8203:00:02.0 enP33283s1: renamed from eth1 Jan 15 12:48:48.098598 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 15 12:48:48.171768 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (500) Jan 15 12:48:48.185799 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 15 12:48:48.239047 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (483) Jan 15 12:48:48.252691 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 15 12:48:48.261033 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 15 12:48:48.287318 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 15 12:48:48.309961 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 12:48:48.338319 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:48:48.344822 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:48:49.354060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:48:49.354115 disk-uuid[608]: The operation has completed successfully. Jan 15 12:48:49.422033 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 12:48:49.423917 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 12:48:49.453888 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 15 12:48:49.468016 sh[694]: Success Jan 15 12:48:49.508761 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 15 12:48:49.694000 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 15 12:48:49.723851 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 15 12:48:49.735460 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 15 12:48:49.770826 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234 Jan 15 12:48:49.770880 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 15 12:48:49.777889 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 15 12:48:49.783199 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 12:48:49.787421 kernel: BTRFS info (device dm-0): using free space tree Jan 15 12:48:50.050552 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jan 15 12:48:50.056118 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 12:48:50.076988 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 12:48:50.091307 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 12:48:50.124136 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:48:50.124157 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 15 12:48:50.124167 kernel: BTRFS info (device sda6): using free space tree Jan 15 12:48:50.145796 kernel: BTRFS info (device sda6): auto enabling async discard Jan 15 12:48:50.153857 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 15 12:48:50.167792 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:48:50.174209 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 12:48:50.186981 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 12:48:50.240709 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 12:48:50.264938 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 12:48:50.288323 systemd-networkd[878]: lo: Link UP Jan 15 12:48:50.288336 systemd-networkd[878]: lo: Gained carrier Jan 15 12:48:50.289988 systemd-networkd[878]: Enumeration completed Jan 15 12:48:50.290608 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 12:48:50.290611 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 12:48:50.292588 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 12:48:50.302912 systemd[1]: Reached target network.target - Network. Jan 15 12:48:50.374735 kernel: mlx5_core 8203:00:02.0 enP33283s1: Link up Jan 15 12:48:50.412734 kernel: hv_netvsc 002248bd-4fac-0022-48bd-4fac002248bd eth0: Data path switched to VF: enP33283s1 Jan 15 12:48:50.413511 systemd-networkd[878]: enP33283s1: Link UP Jan 15 12:48:50.413623 systemd-networkd[878]: eth0: Link UP Jan 15 12:48:50.413781 systemd-networkd[878]: eth0: Gained carrier Jan 15 12:48:50.413789 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 12:48:50.438239 systemd-networkd[878]: enP33283s1: Gained carrier Jan 15 12:48:50.452784 systemd-networkd[878]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 12:48:51.176810 ignition[820]: Ignition 2.19.0 Jan 15 12:48:51.176822 ignition[820]: Stage: fetch-offline Jan 15 12:48:51.180504 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 12:48:51.176862 ignition[820]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:48:51.193963 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 15 12:48:51.176871 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:48:51.177000 ignition[820]: parsed url from cmdline: "" Jan 15 12:48:51.177003 ignition[820]: no config URL provided Jan 15 12:48:51.177007 ignition[820]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 12:48:51.177015 ignition[820]: no config at "/usr/lib/ignition/user.ign" Jan 15 12:48:51.177020 ignition[820]: failed to fetch config: resource requires networking Jan 15 12:48:51.177241 ignition[820]: Ignition finished successfully Jan 15 12:48:51.225375 ignition[886]: Ignition 2.19.0 Jan 15 12:48:51.225381 ignition[886]: Stage: fetch Jan 15 12:48:51.225552 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:48:51.225562 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:48:51.225664 ignition[886]: parsed url from cmdline: "" Jan 15 12:48:51.225667 ignition[886]: no config URL provided Jan 15 12:48:51.225672 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 12:48:51.225680 ignition[886]: no config at "/usr/lib/ignition/user.ign" Jan 15 12:48:51.225700 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 15 12:48:51.345967 ignition[886]: GET result: OK Jan 15 12:48:51.346623 ignition[886]: config has been read from IMDS userdata Jan 15 12:48:51.346663 ignition[886]: parsing config with SHA512: 85324a30c14f99b17a0d5a6404cea2fca433012ddf4458496437231b978ab890755feef76179084e81ebcf89a4ae724ce3088e9b28ba043daa502e4ee6e24f57 Jan 15 12:48:51.350928 unknown[886]: fetched base config from "system" Jan 15 12:48:51.351294 ignition[886]: fetch: fetch complete Jan 15 12:48:51.350936 unknown[886]: fetched base config from "system" Jan 15 12:48:51.351298 ignition[886]: fetch: fetch passed Jan 15 12:48:51.350941 unknown[886]: fetched user config from "azure" Jan 15 12:48:51.351341 ignition[886]: Ignition finished successfully Jan 15 12:48:51.353260 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 15 12:48:51.403557 ignition[892]: Ignition 2.19.0 Jan 15 12:48:51.373133 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 15 12:48:51.403564 ignition[892]: Stage: kargs Jan 15 12:48:51.407675 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 12:48:51.403782 ignition[892]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:48:51.403792 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:48:51.434005 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 12:48:51.404753 ignition[892]: kargs: kargs passed Jan 15 12:48:51.404812 ignition[892]: Ignition finished successfully Jan 15 12:48:51.466801 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 15 12:48:51.459835 ignition[899]: Ignition 2.19.0 Jan 15 12:48:51.474320 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 12:48:51.459842 ignition[899]: Stage: disks Jan 15 12:48:51.485761 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 12:48:51.460048 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:48:51.499454 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 12:48:51.460057 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:48:51.509441 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 15 12:48:51.461799 ignition[899]: disks: disks passed Jan 15 12:48:51.522636 systemd[1]: Reached target basic.target - Basic System. Jan 15 12:48:51.461863 ignition[899]: Ignition finished successfully Jan 15 12:48:51.550991 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 15 12:48:51.664089 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 15 12:48:51.675166 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 12:48:51.697960 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 12:48:51.758823 kernel: EXT4-fs (sda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none. Jan 15 12:48:51.759455 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 15 12:48:51.765757 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 12:48:51.818805 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 12:48:51.831554 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 12:48:51.837316 systemd-networkd[878]: eth0: Gained IPv6LL Jan 15 12:48:51.839923 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 15 12:48:51.905681 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (919) Jan 15 12:48:51.905713 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:48:51.905757 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 15 12:48:51.905776 kernel: BTRFS info (device sda6): using free space tree Jan 15 12:48:51.866490 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 15 12:48:51.927512 kernel: BTRFS info (device sda6): auto enabling async discard Jan 15 12:48:51.866524 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 12:48:51.884629 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 15 12:48:51.948040 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 15 12:48:51.955671 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 12:48:52.155808 systemd-networkd[878]: enP33283s1: Gained IPv6LL Jan 15 12:48:52.386139 coreos-metadata[921]: Jan 15 12:48:52.386 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 15 12:48:52.395301 coreos-metadata[921]: Jan 15 12:48:52.392 INFO Fetch successful Jan 15 12:48:52.395301 coreos-metadata[921]: Jan 15 12:48:52.392 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 15 12:48:52.413253 coreos-metadata[921]: Jan 15 12:48:52.413 INFO Fetch successful Jan 15 12:48:52.427809 coreos-metadata[921]: Jan 15 12:48:52.427 INFO wrote hostname ci-4081.3.0-a-f89ceb891c to /sysroot/etc/hostname Jan 15 12:48:52.438624 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
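Both fetches above are plain HTTP GETs against the Azure Instance Metadata Service: Ignition pulled the userData config and verified it by SHA512, and coreos-metadata pulled the instance name it wrote to /sysroot/etc/hostname. A minimal Python sketch of the same request, assuming only that IMDS expects the standard "Metadata: true" header and returns userData base64-encoded (the URL is copied from the log; the header and the decode step are platform assumptions, not shown in this boot):

    import base64
    import hashlib
    import urllib.request

    # URL copied verbatim from the ignition[886] GET line above; the
    # compute/name endpoint used by coreos-metadata works the same way.
    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    # Assumption: Azure IMDS rejects requests without this header.
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        raw = resp.read()

    # Assumption: the userData payload arrives base64-encoded.
    config = base64.b64decode(raw)
    # Same digest form as the "parsing config with SHA512: ..." line above.
    print(hashlib.sha512(config).hexdigest())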
Jan 15 12:48:52.750887 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Jan 15 12:48:52.791197 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Jan 15 12:48:52.810587 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Jan 15 12:48:52.821579 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Jan 15 12:48:53.750655 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 15 12:48:53.770059 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 15 12:48:53.778913 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 12:48:53.807516 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:48:53.807086 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 15 12:48:53.833811 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 15 12:48:53.847115 ignition[1037]: INFO : Ignition 2.19.0 Jan 15 12:48:53.847115 ignition[1037]: INFO : Stage: mount Jan 15 12:48:53.847115 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 12:48:53.847115 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:48:53.847115 ignition[1037]: INFO : mount: mount passed Jan 15 12:48:53.847115 ignition[1037]: INFO : Ignition finished successfully Jan 15 12:48:53.848317 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 12:48:53.883949 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 12:48:53.907927 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 12:48:53.941068 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1048) Jan 15 12:48:53.941115 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:48:53.948375 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 15 12:48:53.954021 kernel: BTRFS info (device sda6): using free space tree Jan 15 12:48:53.961729 kernel: BTRFS info (device sda6): auto enabling async discard Jan 15 12:48:53.963646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
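The sysroot-oem.mount unit completing above mounts the BTRFS partition labeled OEM (seen in the device-scan lines) under /sysroot/oem. A hedged sketch of what such a generated mount unit looks like; the real unit is produced in the initrd and its options are not visible in the log:

    # sysroot-oem.mount, illustrative only
    [Unit]
    Description=/sysroot/oem

    [Mount]
    What=/dev/disk/by-label/OEM    # label taken from the BTRFS scan line
    Where=/sysroot/oem
    Type=btrfs                     # mount options here are an assumption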
Jan 15 12:48:53.995864 ignition[1065]: INFO : Ignition 2.19.0 Jan 15 12:48:53.995864 ignition[1065]: INFO : Stage: files Jan 15 12:48:54.005070 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 12:48:54.005070 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:48:54.005070 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping Jan 15 12:48:54.025652 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 12:48:54.025652 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 12:48:54.075320 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 12:48:54.083977 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 12:48:54.083977 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 12:48:54.076283 unknown[1065]: wrote ssh authorized keys file for user: core Jan 15 12:48:54.106735 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 15 12:48:54.106735 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 15 12:48:54.150439 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 12:48:54.253990 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 15 12:48:54.253990 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 15 12:48:54.280210 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 15 12:48:54.722060 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 15 12:48:54.931188 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 15 12:48:54.931188 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 15 12:48:54.970979 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 12:48:54.984054 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 12:48:54.984054 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 15 12:48:54.984054 ignition[1065]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 15 12:48:54.984054 ignition[1065]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 12:48:54.984054 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 12:48:54.984054 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 15 12:48:54.984054 ignition[1065]: INFO : files: files passed Jan 15 12:48:54.984054 ignition[1065]: INFO : Ignition finished successfully Jan 15 12:48:54.984253 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 15 12:48:55.030007 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 12:48:55.049890 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 12:48:55.081661 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 12:48:55.081777 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 12:48:55.125050 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 12:48:55.135037 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 12:48:55.135037 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 12:48:55.126430 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 12:48:55.143924 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 12:48:55.189029 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 12:48:55.228245 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 12:48:55.228398 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
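The files stage above is driven by an Ignition v3 config: an SSH key for the core user, the helm tarball and kubernetes sysext image fetched by URL, a link placed in /etc/extensions, and prepare-helm.service written and preset-enabled (plus the small files omitted here). A compressed sketch of a config that would produce those operations, assuming spec version 3.3.0 and placeholder key and unit contents (only the paths and URLs are taken from the log):

    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core",
            "sshAuthorizedKeys": [ "ssh-ed25519 AAAA... (placeholder)" ] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz" } },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true,
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n(placeholder body)" }
        ]
      }
    }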
Jan 15 12:48:55.244169 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 12:48:55.259569 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 12:48:55.272649 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 12:48:55.294918 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 12:48:55.318523 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 12:48:55.341033 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 12:48:55.362428 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 12:48:55.370221 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 12:48:55.385895 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 12:48:55.399336 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 12:48:55.399507 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 12:48:55.419500 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 12:48:55.433830 systemd[1]: Stopped target basic.target - Basic System. Jan 15 12:48:55.445753 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 12:48:55.459039 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 12:48:55.473706 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 12:48:55.488759 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 12:48:55.502403 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 12:48:55.517639 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 12:48:55.532639 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 12:48:55.545479 systemd[1]: Stopped target swap.target - Swaps. Jan 15 12:48:55.556751 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 12:48:55.556925 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 12:48:55.574794 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 12:48:55.588311 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 12:48:55.603481 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 12:48:55.611116 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 12:48:55.619758 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 12:48:55.619934 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 12:48:55.641635 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 12:48:55.641827 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 12:48:55.655906 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 12:48:55.656057 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 12:48:55.668438 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 15 12:48:55.668582 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 15 12:48:55.706897 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 15 12:48:55.746708 ignition[1118]: INFO : Ignition 2.19.0 Jan 15 12:48:55.746708 ignition[1118]: INFO : Stage: umount Jan 15 12:48:55.746708 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 12:48:55.746708 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:48:55.746708 ignition[1118]: INFO : umount: umount passed Jan 15 12:48:55.746708 ignition[1118]: INFO : Ignition finished successfully Jan 15 12:48:55.729259 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 12:48:55.729456 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 12:48:55.740935 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 12:48:55.752880 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 12:48:55.753049 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 12:48:55.765852 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 12:48:55.765961 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 12:48:55.797733 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 12:48:55.797835 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 12:48:55.810754 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 12:48:55.810872 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 12:48:55.826499 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 12:48:55.826552 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 12:48:55.840312 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 15 12:48:55.840360 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 15 12:48:55.859860 systemd[1]: Stopped target network.target - Network. Jan 15 12:48:55.881573 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 12:48:55.881658 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 12:48:55.903170 systemd[1]: Stopped target paths.target - Path Units. Jan 15 12:48:55.915344 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 12:48:55.918759 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 12:48:55.930091 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 12:48:55.942201 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 12:48:55.960588 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 12:48:55.960643 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 12:48:55.967434 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 12:48:55.967482 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 12:48:55.980846 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 12:48:55.980898 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 12:48:55.993088 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 12:48:55.993132 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 12:48:56.006291 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 12:48:56.019635 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 12:48:56.032315 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 15 12:48:56.033253 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 12:48:56.033352 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 12:48:56.049126 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 12:48:56.049229 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 12:48:56.304318 kernel: hv_netvsc 002248bd-4fac-0022-48bd-4fac002248bd eth0: Data path switched from VF: enP33283s1 Jan 15 12:48:56.055228 systemd-networkd[878]: eth0: DHCPv6 lease lost Jan 15 12:48:56.066289 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 12:48:56.066438 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 12:48:56.081102 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 12:48:56.081174 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 12:48:56.118002 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 12:48:56.127988 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 12:48:56.128073 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 12:48:56.141891 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 12:48:56.141951 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 12:48:56.154276 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 12:48:56.154322 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 12:48:56.168365 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 12:48:56.168435 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 12:48:56.182428 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 12:48:56.231185 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 12:48:56.231371 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 12:48:56.245147 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 12:48:56.245203 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 12:48:56.259004 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 12:48:56.259042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 12:48:56.274021 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 12:48:56.274078 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 12:48:56.304388 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 12:48:56.304457 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 12:48:56.316075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 12:48:56.316144 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:48:56.361013 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 12:48:56.375560 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 12:48:56.375640 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 12:48:56.396726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:48:56.396785 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 15 12:48:56.411437 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 12:48:56.413631 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 12:48:56.455269 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 12:48:56.455396 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 12:48:56.651797 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 15 12:48:56.498740 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 12:48:56.498875 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 12:48:56.511665 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 12:48:56.525068 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 12:48:56.525148 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 12:48:56.557989 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 12:48:56.580050 systemd[1]: Switching root. Jan 15 12:48:56.693458 systemd-journald[217]: Journal stopped Jan 15 12:49:00.906476 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 12:49:00.906524 kernel: SELinux: policy capability open_perms=1 Jan 15 12:49:00.906535 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 12:49:00.906544 kernel: SELinux: policy capability always_check_network=0 Jan 15 12:49:00.906558 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 12:49:00.906566 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 12:49:00.906575 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 12:49:00.906583 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 12:49:00.906591 kernel: audit: type=1403 audit(1736945337.839:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 15 12:49:00.906601 systemd[1]: Successfully loaded SELinux policy in 162.821ms. Jan 15 12:49:00.906613 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.599ms. Jan 15 12:49:00.906624 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 15 12:49:00.906633 systemd[1]: Detected virtualization microsoft. Jan 15 12:49:00.906642 systemd[1]: Detected architecture arm64. Jan 15 12:49:00.906652 systemd[1]: Detected first boot. Jan 15 12:49:00.906663 systemd[1]: Hostname set to <ci-4081.3.0-a-f89ceb891c>. Jan 15 12:49:00.906673 systemd[1]: Initializing machine ID from random generator. Jan 15 12:49:00.906685 zram_generator::config[1159]: No configuration found. Jan 15 12:49:00.906695 systemd[1]: Populated /etc with preset unit settings. Jan 15 12:49:00.906704 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 12:49:00.906714 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 12:49:00.906746 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 12:49:00.906761 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 12:49:00.906771 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 12:49:00.906781 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 15 12:49:00.906790 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 12:49:00.906799 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 12:49:00.906809 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 12:49:00.906818 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 12:49:00.906829 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 12:49:00.906839 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 12:49:00.906849 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 12:49:00.906859 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 12:49:00.906868 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 12:49:00.906877 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 12:49:00.906887 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 12:49:00.906896 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 15 12:49:00.906907 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 12:49:00.906918 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 12:49:00.906927 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 12:49:00.906939 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 12:49:00.906949 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 12:49:00.906959 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 12:49:00.906968 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 12:49:00.906978 systemd[1]: Reached target slices.target - Slice Units. Jan 15 12:49:00.906990 systemd[1]: Reached target swap.target - Swaps. Jan 15 12:49:00.906999 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 12:49:00.907009 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 12:49:00.907018 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 12:49:00.907028 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 12:49:00.907038 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 12:49:00.907049 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 12:49:00.907059 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 12:49:00.907069 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 12:49:00.907078 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 12:49:00.907088 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 12:49:00.907098 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 12:49:00.907108 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 15 12:49:00.907120 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 12:49:00.907131 systemd[1]: Reached target machines.target - Containers. Jan 15 12:49:00.907140 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 12:49:00.907150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 12:49:00.907160 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 12:49:00.907170 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 12:49:00.907180 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 12:49:00.907190 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 12:49:00.907201 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 12:49:00.907211 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 12:49:00.907221 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 12:49:00.907232 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 12:49:00.907241 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 12:49:00.907251 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 12:49:00.907261 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 12:49:00.907271 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 12:49:00.907281 kernel: fuse: init (API version 7.39) Jan 15 12:49:00.907290 kernel: loop: module loaded Jan 15 12:49:00.907299 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 12:49:00.907308 kernel: ACPI: bus type drm_connector registered Jan 15 12:49:00.907317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 12:49:00.907327 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 12:49:00.907338 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 12:49:00.907383 systemd-journald[1262]: Collecting audit messages is disabled. Jan 15 12:49:00.907407 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 12:49:00.907418 systemd-journald[1262]: Journal started Jan 15 12:49:00.907438 systemd-journald[1262]: Runtime Journal (/run/log/journal/b379242615f447febacfdcd4b7538552) is 8.0M, max 78.5M, 70.5M free. Jan 15 12:48:59.727375 systemd[1]: Queued start job for default target multi-user.target. Jan 15 12:48:59.876791 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 15 12:48:59.877188 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 12:48:59.877545 systemd[1]: systemd-journald.service: Consumed 3.673s CPU time. Jan 15 12:49:00.925429 systemd[1]: verity-setup.service: Deactivated successfully. Jan 15 12:49:00.925495 systemd[1]: Stopped verity-setup.service. Jan 15 12:49:00.944838 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 12:49:00.945778 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 15 12:49:00.952089 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 12:49:00.958622 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 12:49:00.964469 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 12:49:00.970834 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 12:49:00.977836 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 12:49:00.983761 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 12:49:00.992325 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 12:49:01.000098 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 12:49:01.000251 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 12:49:01.009229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 12:49:01.009364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 12:49:01.016251 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 12:49:01.016408 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 12:49:01.023071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 12:49:01.023219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 12:49:01.031346 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 12:49:01.031491 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 12:49:01.038311 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 12:49:01.038450 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 12:49:01.044912 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 12:49:01.051431 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 12:49:01.059438 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 12:49:01.067302 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 12:49:01.085391 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 12:49:01.098823 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 12:49:01.106436 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 12:49:01.112815 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 12:49:01.112859 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 12:49:01.121513 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 15 12:49:01.138896 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 12:49:01.146672 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 12:49:01.154103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 12:49:01.157951 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 15 12:49:01.166012 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 15 12:49:01.172715 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 12:49:01.173807 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 12:49:01.180398 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 12:49:01.181597 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 12:49:01.190957 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 12:49:01.201891 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 12:49:01.217002 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 15 12:49:01.226161 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 12:49:01.235893 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 12:49:01.248396 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 12:49:01.258862 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 12:49:01.265748 kernel: loop0: detected capacity change from 0 to 194096 Jan 15 12:49:01.282199 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 12:49:01.289942 systemd-journald[1262]: Time spent on flushing to /var/log/journal/b379242615f447febacfdcd4b7538552 is 17.382ms for 904 entries. Jan 15 12:49:01.289942 systemd-journald[1262]: System Journal (/var/log/journal/b379242615f447febacfdcd4b7538552) is 8.0M, max 2.6G, 2.6G free. Jan 15 12:49:01.352968 systemd-journald[1262]: Received client request to flush runtime journal. Jan 15 12:49:01.353029 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 12:49:01.303035 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 15 12:49:01.310851 udevadm[1296]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 15 12:49:01.355762 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 12:49:01.385808 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 12:49:01.398748 kernel: loop1: detected capacity change from 0 to 114432 Jan 15 12:49:01.436765 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 12:49:01.438868 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 15 12:49:01.470338 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 12:49:01.483887 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 12:49:01.569628 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Jan 15 12:49:01.569643 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Jan 15 12:49:01.573952 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 12:49:01.871744 kernel: loop2: detected capacity change from 0 to 114328 Jan 15 12:49:02.403752 kernel: loop3: detected capacity change from 0 to 31320 Jan 15 12:49:02.730384 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
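The journald lines above show the runtime journal in /run being flushed into a persistent journal under /var/log/journal, each with its own size cap ("max 78.5M" and "max 2.6G" respectively). Those caps correspond to standard journald.conf knobs; a sketch with illustrative values, not read from this host:

    # /etc/systemd/journald.conf (illustrative)
    [Journal]
    Storage=persistent     # keep the journal on disk after the flush
    RuntimeMaxUse=78M      # cap for the /run journal
    SystemMaxUse=2600M     # cap for the persistent journal (~2.6G above)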
Jan 15 12:49:02.742889 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 12:49:02.766556 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Jan 15 12:49:02.824174 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 12:49:02.849849 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 12:49:02.902486 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 15 12:49:02.919903 kernel: loop4: detected capacity change from 0 to 194096 Jan 15 12:49:02.919969 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 12:49:02.945747 kernel: loop5: detected capacity change from 0 to 114432 Jan 15 12:49:02.957759 kernel: loop6: detected capacity change from 0 to 114328 Jan 15 12:49:02.967626 kernel: loop7: detected capacity change from 0 to 31320 Jan 15 12:49:02.974154 (sd-merge)[1343]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 15 12:49:02.974608 (sd-merge)[1343]: Merged extensions into '/usr'. Jan 15 12:49:02.983491 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 12:49:02.983509 systemd[1]: Reloading... Jan 15 12:49:03.100745 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 12:49:03.108774 kernel: hv_vmbus: registering driver hv_balloon Jan 15 12:49:03.125450 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 15 12:49:03.125559 zram_generator::config[1394]: No configuration found. Jan 15 12:49:03.125595 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 15 12:49:03.137817 kernel: hv_vmbus: registering driver hyperv_fb Jan 15 12:49:03.157451 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 15 12:49:03.157552 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 15 12:49:03.175357 kernel: Console: switching to colour dummy device 80x25 Jan 15 12:49:03.186271 kernel: Console: switching to colour frame buffer device 128x48 Jan 15 12:49:03.193589 systemd-networkd[1331]: lo: Link UP Jan 15 12:49:03.194314 systemd-networkd[1331]: lo: Gained carrier Jan 15 12:49:03.197517 systemd-networkd[1331]: Enumeration completed Jan 15 12:49:03.198191 systemd-networkd[1331]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 12:49:03.200755 systemd-networkd[1331]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 12:49:03.227890 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1340) Jan 15 12:49:03.267755 kernel: mlx5_core 8203:00:02.0 enP33283s1: Link up Jan 15 12:49:03.299761 kernel: hv_netvsc 002248bd-4fac-0022-48bd-4fac002248bd eth0: Data path switched to VF: enP33283s1 Jan 15 12:49:03.300260 systemd-networkd[1331]: enP33283s1: Link UP Jan 15 12:49:03.300373 systemd-networkd[1331]: eth0: Link UP Jan 15 12:49:03.300376 systemd-networkd[1331]: eth0: Gained carrier Jan 15 12:49:03.300392 systemd-networkd[1331]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 15 12:49:03.305054 systemd-networkd[1331]: enP33283s1: Gained carrier Jan 15 12:49:03.312787 systemd-networkd[1331]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 12:49:03.333751 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:49:03.405008 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 15 12:49:03.412026 systemd[1]: Reloading finished in 428 ms. Jan 15 12:49:03.441014 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 15 12:49:03.447969 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 12:49:03.454937 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 12:49:03.498080 systemd[1]: Starting ensure-sysext.service... Jan 15 12:49:03.503506 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 12:49:03.512156 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 12:49:03.523292 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 12:49:03.538997 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:49:03.546993 systemd-tmpfiles[1486]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 12:49:03.547652 systemd-tmpfiles[1486]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 15 12:49:03.548535 systemd-tmpfiles[1486]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 15 12:49:03.548774 systemd-tmpfiles[1486]: ACLs are not supported, ignoring. Jan 15 12:49:03.548819 systemd-tmpfiles[1486]: ACLs are not supported, ignoring. Jan 15 12:49:03.550757 systemd[1]: Reloading requested from client PID 1483 ('systemctl') (unit ensure-sysext.service)... Jan 15 12:49:03.550772 systemd[1]: Reloading... Jan 15 12:49:03.561807 systemd-tmpfiles[1486]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 12:49:03.561829 systemd-tmpfiles[1486]: Skipping /boot Jan 15 12:49:03.573316 systemd-tmpfiles[1486]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 12:49:03.573332 systemd-tmpfiles[1486]: Skipping /boot Jan 15 12:49:03.643858 zram_generator::config[1521]: No configuration found. Jan 15 12:49:03.749813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:49:03.820946 systemd[1]: Reloading finished in 269 ms. Jan 15 12:49:03.837226 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 15 12:49:03.850244 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 12:49:03.858495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 12:49:03.866788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:49:03.866962 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
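The (sd-merge) lines above are systemd-sysext overlaying the listed extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-azure) onto /usr. For an image to be merged it must carry an extension-release file matching the host. A sketch of the minimal layout inside an image such as kubernetes.raw, with illustrative values (the payload file name is an assumption):

    usr/bin/kubelet                                        (payload, ends up in /usr/bin)
    usr/lib/extension-release.d/extension-release.kubernetes

    # extension-release.kubernetes, illustrative contents:
    ID=flatcar          # must match the host os-release ID, or be "_any"
    SYSEXT_LEVEL=1.0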
Jan 15 12:49:03.886997 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 15 12:49:03.895014 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 12:49:03.904002 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 15 12:49:03.913016 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 12:49:03.931067 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 12:49:03.940788 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 12:49:03.951538 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:49:03.963659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 12:49:03.969011 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 12:49:03.987799 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 12:49:04.000312 lvm[1583]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 15 12:49:04.005163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 12:49:04.012914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 12:49:04.013733 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 12:49:04.013930 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 12:49:04.023796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 12:49:04.023962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 12:49:04.032194 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 12:49:04.032366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 12:49:04.046284 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 15 12:49:04.057162 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 12:49:04.064971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 12:49:04.072982 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 15 12:49:04.079383 lvm[1610]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 15 12:49:04.083148 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 12:49:04.094081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 12:49:04.104755 augenrules[1612]: No rules Jan 15 12:49:04.108057 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 12:49:04.114326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 12:49:04.115595 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 15 12:49:04.125789 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 12:49:04.126575 systemd-resolved[1590]: Positive Trust Anchors: Jan 15 12:49:04.126930 systemd-resolved[1590]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 12:49:04.127008 systemd-resolved[1590]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 12:49:04.134582 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 15 12:49:04.143288 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 12:49:04.143461 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 12:49:04.145797 systemd-resolved[1590]: Using system hostname 'ci-4081.3.0-a-f89ceb891c'. Jan 15 12:49:04.151171 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 12:49:04.158601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 12:49:04.159695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 12:49:04.168100 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 12:49:04.168241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 12:49:04.182947 systemd[1]: Reached target network.target - Network. Jan 15 12:49:04.188934 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 12:49:04.197408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 12:49:04.203978 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 12:49:04.211693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 12:49:04.222480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 12:49:04.233070 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 12:49:04.240687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 12:49:04.241001 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 12:49:04.248662 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 12:49:04.249795 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 12:49:04.258528 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 12:49:04.258677 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 12:49:04.267483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 12:49:04.267629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 12:49:04.278643 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 12:49:04.288474 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 12:49:04.288643 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 12:49:04.300771 systemd[1]: Finished ensure-sysext.service. 
Jan 15 12:49:04.310972 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 12:49:04.311200 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 12:49:04.699916 systemd-networkd[1331]: eth0: Gained IPv6LL Jan 15 12:49:04.703785 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 12:49:04.712460 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 12:49:04.942529 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:49:05.064424 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 15 12:49:05.073990 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 12:49:05.083803 systemd-networkd[1331]: enP33283s1: Gained IPv6LL Jan 15 12:49:07.577428 ldconfig[1288]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 12:49:07.880988 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 12:49:07.894882 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 12:49:07.903519 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 12:49:07.912333 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 12:49:07.919130 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 12:49:07.926822 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 12:49:07.934643 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 12:49:07.941263 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 12:49:07.948647 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 12:49:07.956172 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 12:49:07.956210 systemd[1]: Reached target paths.target - Path Units. Jan 15 12:49:07.961584 systemd[1]: Reached target timers.target - Timer Units. Jan 15 12:49:08.237226 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 12:49:08.245410 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 12:49:08.278522 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 12:49:08.285130 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 12:49:08.291464 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 12:49:08.297080 systemd[1]: Reached target basic.target - Basic System. Jan 15 12:49:08.302502 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 12:49:08.302535 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 12:49:08.314858 systemd[1]: Starting chronyd.service - NTP client/server... Jan 15 12:49:08.322877 systemd[1]: Starting containerd.service - containerd container runtime... 
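docker.socket, now listening above, drew the same warning during both daemon reloads earlier ("/usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/..."). systemd rewrites the path at load time, and the fix it asks for is a one-line change in the unit; a sketch, with the rest of the unit omitted:

    # /usr/lib/systemd/system/docker.socket, line 6
    [Socket]
    # before (triggers the warning; systemd rewrites it on the fly):
    #ListenStream=/var/run/docker.sock
    # after (what systemd substitutes, and what silences the warning):
    ListenStream=/run/docker.sock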
Jan 15 12:49:08.334947 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 15 12:49:08.343270 (chronyd)[1646]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 15 12:49:08.351404 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 12:49:08.358151 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 12:49:08.367961 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 12:49:08.368813 jq[1652]: false Jan 15 12:49:08.373926 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 12:49:08.373971 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 15 12:49:08.383909 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 15 12:49:08.390305 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 15 12:49:08.393003 KVP[1654]: KVP starting; pid is:1654 Jan 15 12:49:08.393422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:49:08.404992 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 12:49:08.413952 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 15 12:49:08.422918 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 12:49:08.431519 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 12:49:08.440917 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 15 12:49:08.454298 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 12:49:08.462239 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 12:49:08.462748 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 12:49:08.466935 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 12:49:08.475882 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 12:49:08.488850 jq[1669]: true Jan 15 12:49:08.490253 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 12:49:08.490435 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 12:49:08.507134 kernel: hv_utils: KVP IC version 4.0 Jan 15 12:49:08.499227 KVP[1654]: KVP LIC Version: 3.1 Jan 15 12:49:08.499582 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 12:49:08.499786 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 15 12:49:08.528178 jq[1673]: true Jan 15 12:49:08.556901 chronyd[1697]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 15 12:49:08.559180 chronyd[1697]: Timezone right/UTC failed leap second check, ignoring Jan 15 12:49:08.559369 chronyd[1697]: Loaded seccomp filter (level 2) Jan 15 12:49:08.560195 systemd[1]: Started chronyd.service - NTP client/server. 
Jan 15 12:49:08.615777 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 12:49:08.615953 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 12:49:08.629935 (ntainerd)[1702]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 15 12:49:08.659816 extend-filesystems[1653]: Found loop4 Jan 15 12:49:08.659816 extend-filesystems[1653]: Found loop5 Jan 15 12:49:08.659816 extend-filesystems[1653]: Found loop6 Jan 15 12:49:08.659816 extend-filesystems[1653]: Found loop7 Jan 15 12:49:08.659816 extend-filesystems[1653]: Found sda Jan 15 12:49:08.659816 extend-filesystems[1653]: Found sda1 Jan 15 12:49:08.659816 extend-filesystems[1653]: Found sda2 Jan 15 12:49:08.659816 extend-filesystems[1653]: Found sda3 Jan 15 12:49:08.659816 extend-filesystems[1653]: Found usr Jan 15 12:49:08.659816 extend-filesystems[1653]: Found sda4 Jan 15 12:49:08.659816 extend-filesystems[1653]: Found sda6 Jan 15 12:49:08.659816 extend-filesystems[1653]: Found sda7 Jan 15 12:49:08.659816 extend-filesystems[1653]: Found sda9 Jan 15 12:49:08.659816 extend-filesystems[1653]: Checking size of /dev/sda9 Jan 15 12:49:08.741149 tar[1672]: linux-arm64/helm Jan 15 12:49:08.715123 systemd-logind[1665]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 15 12:49:08.720896 systemd-logind[1665]: New seat seat0. Jan 15 12:49:08.721817 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 12:49:08.750818 update_engine[1668]: I20250115 12:49:08.747895 1668 main.cc:92] Flatcar Update Engine starting Jan 15 12:49:09.111738 tar[1672]: linux-arm64/LICENSE Jan 15 12:49:09.111838 tar[1672]: linux-arm64/README.md Jan 15 12:49:09.124695 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 12:49:09.175385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:49:09.183145 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:49:09.444264 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 12:49:09.513676 dbus-daemon[1649]: [system] SELinux support is enabled Jan 15 12:49:09.514532 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 12:49:09.523602 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 12:49:09.525927 dbus-daemon[1649]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 15 12:49:09.523643 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 12:49:09.531594 update_engine[1668]: I20250115 12:49:09.531527 1668 update_check_scheduler.cc:74] Next update check in 11m56s Jan 15 12:49:09.532228 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 12:49:09.532256 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 12:49:09.540113 systemd[1]: Started update-engine.service - Update Engine. Jan 15 12:49:09.555524 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 15 12:49:09.619634 kubelet[1728]: E0115 12:49:09.619583 1728 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:49:09.622487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:49:09.622645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:49:09.925297 extend-filesystems[1653]: Old size kept for /dev/sda9 Jan 15 12:49:09.925297 extend-filesystems[1653]: Found sr0 Jan 15 12:49:09.899015 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 12:49:09.899191 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 12:49:09.961489 coreos-metadata[1648]: Jan 15 12:49:09.960 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 15 12:49:09.967737 coreos-metadata[1648]: Jan 15 12:49:09.965 INFO Fetch successful Jan 15 12:49:09.967737 coreos-metadata[1648]: Jan 15 12:49:09.965 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 15 12:49:09.969780 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1719) Jan 15 12:49:09.970444 coreos-metadata[1648]: Jan 15 12:49:09.970 INFO Fetch successful Jan 15 12:49:09.970444 coreos-metadata[1648]: Jan 15 12:49:09.970 INFO Fetching http://168.63.129.16/machine/25cb939d-727e-42ca-883e-ebf9b3f75c89/feb02772%2D68f9%2D477c%2Da686%2D040e0073085d.%5Fci%2D4081.3.0%2Da%2Df89ceb891c?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 15 12:49:09.972850 coreos-metadata[1648]: Jan 15 12:49:09.972 INFO Fetch successful Jan 15 12:49:09.972850 coreos-metadata[1648]: Jan 15 12:49:09.972 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 15 12:49:09.992434 coreos-metadata[1648]: Jan 15 12:49:09.988 INFO Fetch successful Jan 15 12:49:10.036886 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 15 12:49:10.054998 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 12:49:10.591119 locksmithd[1741]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 12:49:11.028234 sshd_keygen[1707]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 12:49:11.047793 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 12:49:11.061002 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 12:49:11.069172 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 15 12:49:11.077005 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 12:49:11.078768 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 12:49:11.095959 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 12:49:11.103675 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 15 12:49:11.121605 bash[1695]: Updated "/home/core/.ssh/authorized_keys" Jan 15 12:49:11.123106 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 12:49:11.131996 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
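The kubelet failure logged above is expected this early in the boot: on a kubeadm-style node the unit starts before anything has written /var/lib/kubelet/config.yaml, so kubelet exits with status 1 and systemd keeps rescheduling it — the "restart counter is at N" lines further down are the same loop repeating. A minimal sketch of the precondition it trips over (the path is taken from the error message; the remediation hint is an assumption about how such a node normally gets joined):

```python
from pathlib import Path

# kubelet is started with --config pointing here on kubeadm-provisioned nodes;
# the path is the one named in the error above, everything else is assumed.
cfg = Path("/var/lib/kubelet/config.yaml")
if not cfg.is_file():
    raise SystemExit(
        "kubelet config missing; it is normally written by 'kubeadm init' or "
        "'kubeadm join', so the unit fails and restarts until that happens"
    )
```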
Jan 15 12:49:11.348928 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 12:49:11.361094 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 12:49:11.369037 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 15 12:49:11.377082 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 12:49:12.266753 containerd[1702]: time="2025-01-15T12:49:12.265617320Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 15 12:49:12.289136 containerd[1702]: time="2025-01-15T12:49:12.289073920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 15 12:49:12.290704 containerd[1702]: time="2025-01-15T12:49:12.290601680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:49:12.290704 containerd[1702]: time="2025-01-15T12:49:12.290635520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 15 12:49:12.290704 containerd[1702]: time="2025-01-15T12:49:12.290651640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 15 12:49:12.290862 containerd[1702]: time="2025-01-15T12:49:12.290833960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 15 12:49:12.290889 containerd[1702]: time="2025-01-15T12:49:12.290861480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 15 12:49:12.290946 containerd[1702]: time="2025-01-15T12:49:12.290925760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:49:12.290971 containerd[1702]: time="2025-01-15T12:49:12.290944320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 15 12:49:12.291141 containerd[1702]: time="2025-01-15T12:49:12.291118160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:49:12.291141 containerd[1702]: time="2025-01-15T12:49:12.291138480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 15 12:49:12.291221 containerd[1702]: time="2025-01-15T12:49:12.291152680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:49:12.291221 containerd[1702]: time="2025-01-15T12:49:12.291163680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 15 12:49:12.291258 containerd[1702]: time="2025-01-15T12:49:12.291232920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 15 12:49:12.291463 containerd[1702]: time="2025-01-15T12:49:12.291434280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 15 12:49:12.291574 containerd[1702]: time="2025-01-15T12:49:12.291551120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:49:12.291574 containerd[1702]: time="2025-01-15T12:49:12.291570840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 15 12:49:12.291669 containerd[1702]: time="2025-01-15T12:49:12.291647800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 15 12:49:12.291728 containerd[1702]: time="2025-01-15T12:49:12.291698400Z" level=info msg="metadata content store policy set" policy=shared Jan 15 12:49:12.728958 containerd[1702]: time="2025-01-15T12:49:12.728911680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 15 12:49:12.729081 containerd[1702]: time="2025-01-15T12:49:12.728984200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 15 12:49:12.729081 containerd[1702]: time="2025-01-15T12:49:12.729002200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 15 12:49:12.729081 containerd[1702]: time="2025-01-15T12:49:12.729018040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 15 12:49:12.729081 containerd[1702]: time="2025-01-15T12:49:12.729036520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 15 12:49:12.729249 containerd[1702]: time="2025-01-15T12:49:12.729214200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 15 12:49:12.729494 containerd[1702]: time="2025-01-15T12:49:12.729474040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 15 12:49:12.729604 containerd[1702]: time="2025-01-15T12:49:12.729584360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 15 12:49:12.729646 containerd[1702]: time="2025-01-15T12:49:12.729605920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 15 12:49:12.729646 containerd[1702]: time="2025-01-15T12:49:12.729618240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 15 12:49:12.729646 containerd[1702]: time="2025-01-15T12:49:12.729630920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 15 12:49:12.729646 containerd[1702]: time="2025-01-15T12:49:12.729642720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 15 12:49:12.729736 containerd[1702]: time="2025-01-15T12:49:12.729656040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 15 12:49:12.729736 containerd[1702]: time="2025-01-15T12:49:12.729675760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 15 12:49:12.729736 containerd[1702]: time="2025-01-15T12:49:12.729692800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 15 12:49:12.729736 containerd[1702]: time="2025-01-15T12:49:12.729705600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 15 12:49:12.729817 containerd[1702]: time="2025-01-15T12:49:12.729742960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 15 12:49:12.729817 containerd[1702]: time="2025-01-15T12:49:12.729756960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 15 12:49:12.729817 containerd[1702]: time="2025-01-15T12:49:12.729775000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.729817 containerd[1702]: time="2025-01-15T12:49:12.729805360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.729886 containerd[1702]: time="2025-01-15T12:49:12.729816520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.729886 containerd[1702]: time="2025-01-15T12:49:12.729832640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.729886 containerd[1702]: time="2025-01-15T12:49:12.729844200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.729886 containerd[1702]: time="2025-01-15T12:49:12.729859240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.729886 containerd[1702]: time="2025-01-15T12:49:12.729869920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.729886 containerd[1702]: time="2025-01-15T12:49:12.729881800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.730007 containerd[1702]: time="2025-01-15T12:49:12.729908320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.730007 containerd[1702]: time="2025-01-15T12:49:12.729923480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.730007 containerd[1702]: time="2025-01-15T12:49:12.729934400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.730007 containerd[1702]: time="2025-01-15T12:49:12.729945520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.730007 containerd[1702]: time="2025-01-15T12:49:12.729970160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.730007 containerd[1702]: time="2025-01-15T12:49:12.729986480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 15 12:49:12.730007 containerd[1702]: time="2025-01-15T12:49:12.730005920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 15 12:49:12.730150 containerd[1702]: time="2025-01-15T12:49:12.730020280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.730150 containerd[1702]: time="2025-01-15T12:49:12.730033080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 15 12:49:12.730150 containerd[1702]: time="2025-01-15T12:49:12.730087160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 15 12:49:12.730150 containerd[1702]: time="2025-01-15T12:49:12.730103680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 15 12:49:12.730150 containerd[1702]: time="2025-01-15T12:49:12.730113840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 15 12:49:12.730150 containerd[1702]: time="2025-01-15T12:49:12.730128240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 15 12:49:12.730150 containerd[1702]: time="2025-01-15T12:49:12.730140000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 15 12:49:12.730150 containerd[1702]: time="2025-01-15T12:49:12.730156800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 15 12:49:12.730150 containerd[1702]: time="2025-01-15T12:49:12.730166400Z" level=info msg="NRI interface is disabled by configuration." Jan 15 12:49:12.730150 containerd[1702]: time="2025-01-15T12:49:12.730176120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 15 12:49:12.730529 containerd[1702]: time="2025-01-15T12:49:12.730455840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 15 12:49:12.730529 containerd[1702]: time="2025-01-15T12:49:12.730530000Z" level=info msg="Connect containerd service" Jan 15 12:49:12.730703 containerd[1702]: time="2025-01-15T12:49:12.730556640Z" level=info msg="using legacy CRI server" Jan 15 12:49:12.730703 containerd[1702]: time="2025-01-15T12:49:12.730563880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 12:49:12.730703 containerd[1702]: time="2025-01-15T12:49:12.730681040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 15 12:49:12.732159 containerd[1702]: time="2025-01-15T12:49:12.732049320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 12:49:12.739427 
containerd[1702]: time="2025-01-15T12:49:12.732274480Z" level=info msg="Start subscribing containerd event" Jan 15 12:49:12.739427 containerd[1702]: time="2025-01-15T12:49:12.732606600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 12:49:12.739427 containerd[1702]: time="2025-01-15T12:49:12.732620720Z" level=info msg="Start recovering state" Jan 15 12:49:12.739427 containerd[1702]: time="2025-01-15T12:49:12.732651800Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 12:49:12.739427 containerd[1702]: time="2025-01-15T12:49:12.732697760Z" level=info msg="Start event monitor" Jan 15 12:49:12.739427 containerd[1702]: time="2025-01-15T12:49:12.732708760Z" level=info msg="Start snapshots syncer" Jan 15 12:49:12.739427 containerd[1702]: time="2025-01-15T12:49:12.732734760Z" level=info msg="Start cni network conf syncer for default" Jan 15 12:49:12.739427 containerd[1702]: time="2025-01-15T12:49:12.732743920Z" level=info msg="Start streaming server" Jan 15 12:49:12.732906 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 12:49:12.741650 containerd[1702]: time="2025-01-15T12:49:12.741595480Z" level=info msg="containerd successfully booted in 0.476749s" Jan 15 12:49:12.742878 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 12:49:12.750759 systemd[1]: Startup finished in 709ms (kernel) + 12.396s (initrd) + 15.071s (userspace) = 28.178s. Jan 15 12:49:14.893929 login[1812]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 15 12:49:14.894709 login[1811]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:49:14.904889 systemd-logind[1665]: New session 1 of user core. Jan 15 12:49:14.907097 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 12:49:14.913953 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 12:49:14.994260 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 12:49:14.999964 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 12:49:15.006651 (systemd)[1823]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 12:49:15.895640 login[1812]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:49:15.899626 systemd-logind[1665]: New session 2 of user core. Jan 15 12:49:16.368677 systemd[1823]: Queued start job for default target default.target. Jan 15 12:49:16.377635 systemd[1823]: Created slice app.slice - User Application Slice. Jan 15 12:49:16.377659 systemd[1823]: Reached target paths.target - Paths. Jan 15 12:49:16.377671 systemd[1823]: Reached target timers.target - Timers. Jan 15 12:49:16.378956 systemd[1823]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 12:49:16.389813 systemd[1823]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 12:49:16.389966 systemd[1823]: Reached target sockets.target - Sockets. Jan 15 12:49:16.389979 systemd[1823]: Reached target basic.target - Basic System. Jan 15 12:49:16.390019 systemd[1823]: Reached target default.target - Main User Target. Jan 15 12:49:16.390045 systemd[1823]: Startup finished in 1.378s. Jan 15 12:49:16.390123 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 12:49:16.397145 systemd[1]: Started session-1.scope - Session 1 of User core. 
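One detail worth pulling out of the long CRI config dump a few records back: the runc runtime entry carries Options:map[SystemdCgroup:true] while the legacy top-level SystemdCgroup field is false; the per-runtime option is the one containerd honours for cgroup management. To check the same knob from the on-disk config rather than the log, a sketch using Python 3.11's tomllib (the path is containerd's conventional default and is an assumption — Flatcar may run on built-in defaults without a config file present):

```python
import tomllib

# Default containerd config location; adjust if the image uses drop-ins instead.
with open("/etc/containerd/config.toml", "rb") as f:
    cfg = tomllib.load(f)

runc = cfg["plugins"]["io.containerd.grpc.v1.cri"]["containerd"]["runtimes"]["runc"]
print(runc["options"]["SystemdCgroup"])  # True, matching the dump above
```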
Jan 15 12:49:16.398588 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 12:49:19.656058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 12:49:19.664903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:49:21.717094 waagent[1807]: 2025-01-15T12:49:21.711179Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 15 12:49:21.717939 waagent[1807]: 2025-01-15T12:49:21.717864Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 15 12:49:21.723135 waagent[1807]: 2025-01-15T12:49:21.723067Z INFO Daemon Daemon Python: 3.11.9 Jan 15 12:49:21.728750 waagent[1807]: 2025-01-15T12:49:21.727901Z INFO Daemon Daemon Run daemon Jan 15 12:49:21.733590 waagent[1807]: 2025-01-15T12:49:21.733255Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 15 12:49:21.742930 waagent[1807]: 2025-01-15T12:49:21.742853Z INFO Daemon Daemon Using waagent for provisioning Jan 15 12:49:21.748588 waagent[1807]: 2025-01-15T12:49:21.748534Z INFO Daemon Daemon Activate resource disk Jan 15 12:49:21.753627 waagent[1807]: 2025-01-15T12:49:21.753573Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 15 12:49:21.765250 waagent[1807]: 2025-01-15T12:49:21.765172Z INFO Daemon Daemon Found device: None Jan 15 12:49:21.770102 waagent[1807]: 2025-01-15T12:49:21.770044Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 15 12:49:21.779325 waagent[1807]: 2025-01-15T12:49:21.779270Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 15 12:49:21.792966 waagent[1807]: 2025-01-15T12:49:21.792907Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 15 12:49:21.799146 waagent[1807]: 2025-01-15T12:49:21.799098Z INFO Daemon Daemon Running default provisioning handler Jan 15 12:49:21.811829 waagent[1807]: 2025-01-15T12:49:21.811267Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 15 12:49:21.825630 waagent[1807]: 2025-01-15T12:49:21.825566Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 15 12:49:21.837804 waagent[1807]: 2025-01-15T12:49:21.836922Z INFO Daemon Daemon cloud-init is enabled: False Jan 15 12:49:21.842387 waagent[1807]: 2025-01-15T12:49:21.842317Z INFO Daemon Daemon Copying ovf-env.xml Jan 15 12:49:23.141927 waagent[1807]: 2025-01-15T12:49:23.140834Z INFO Daemon Daemon Successfully mounted dvd Jan 15 12:49:23.199103 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 15 12:49:23.201378 waagent[1807]: 2025-01-15T12:49:23.201306Z INFO Daemon Daemon Detect protocol endpoint Jan 15 12:49:23.206595 waagent[1807]: 2025-01-15T12:49:23.206535Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 15 12:49:23.212685 waagent[1807]: 2025-01-15T12:49:23.212630Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 15 12:49:23.221740 waagent[1807]: 2025-01-15T12:49:23.220118Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 15 12:49:23.226008 waagent[1807]: 2025-01-15T12:49:23.225941Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 15 12:49:23.232179 waagent[1807]: 2025-01-15T12:49:23.232120Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 15 12:49:23.416851 waagent[1807]: 2025-01-15T12:49:23.416801Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 15 12:49:23.423954 waagent[1807]: 2025-01-15T12:49:23.423920Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 15 12:49:23.429523 waagent[1807]: 2025-01-15T12:49:23.429474Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 15 12:49:24.575744 waagent[1807]: 2025-01-15T12:49:24.575635Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 15 12:49:24.582902 waagent[1807]: 2025-01-15T12:49:24.582816Z INFO Daemon Daemon Forcing an update of the goal state. Jan 15 12:49:24.596691 waagent[1807]: 2025-01-15T12:49:24.596635Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 15 12:49:24.654962 waagent[1807]: 2025-01-15T12:49:24.654914Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 15 12:49:24.661286 waagent[1807]: 2025-01-15T12:49:24.661229Z INFO Daemon Jan 15 12:49:24.664476 waagent[1807]: 2025-01-15T12:49:24.664421Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0ad011a3-cb7b-4446-81cf-12ced9224606 eTag: 13348624351177755688 source: Fabric] Jan 15 12:49:24.676699 waagent[1807]: 2025-01-15T12:49:24.676649Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 15 12:49:24.684079 waagent[1807]: 2025-01-15T12:49:24.684031Z INFO Daemon Jan 15 12:49:24.687073 waagent[1807]: 2025-01-15T12:49:24.687030Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 15 12:49:24.698335 waagent[1807]: 2025-01-15T12:49:24.698295Z INFO Daemon Daemon Downloading artifacts profile blob Jan 15 12:49:24.784524 waagent[1807]: 2025-01-15T12:49:24.784430Z INFO Daemon Downloaded certificate {'thumbprint': 'A875143AB72B7B1C6E7C171DE123C54D4F19062F', 'hasPrivateKey': False} Jan 15 12:49:24.795306 waagent[1807]: 2025-01-15T12:49:24.795257Z INFO Daemon Downloaded certificate {'thumbprint': '49FC92B084EAF845A6BAAF5FB87BB1C67ED62E72', 'hasPrivateKey': True} Jan 15 12:49:24.806034 waagent[1807]: 2025-01-15T12:49:24.805982Z INFO Daemon Fetch goal state completed Jan 15 12:49:24.817858 waagent[1807]: 2025-01-15T12:49:24.817811Z INFO Daemon Daemon Starting provisioning Jan 15 12:49:24.823405 waagent[1807]: 2025-01-15T12:49:24.823345Z INFO Daemon Daemon Handle ovf-env.xml. Jan 15 12:49:24.828523 waagent[1807]: 2025-01-15T12:49:24.828442Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-f89ceb891c] Jan 15 12:49:25.300744 waagent[1807]: 2025-01-15T12:49:25.298487Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-f89ceb891c] Jan 15 12:49:25.305488 waagent[1807]: 2025-01-15T12:49:25.305418Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 15 12:49:25.312499 waagent[1807]: 2025-01-15T12:49:25.312439Z INFO Daemon Daemon Primary interface is [eth0] Jan 15 12:49:25.361493 systemd-networkd[1331]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 12:49:25.361501 systemd-networkd[1331]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
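The 168.63.129.16 address the agent keeps testing routes to is Azure's fixed WireServer/platform IP, reachable from every VM on a virtual network; the ?comp=versions fetch seen earlier in the log is a plain unauthenticated GET against it. A rough, illustrative equivalent of that probe (the x-ms-version value mirrors the "Server preferred version" line above; this is a sketch, not the agent's actual code):

```python
import urllib.request

WIRESERVER = "168.63.129.16"  # fixed Azure platform address seen throughout this log
req = urllib.request.Request(
    f"http://{WIRESERVER}/?comp=versions",
    headers={"x-ms-agent-name": "probe", "x-ms-version": "2015-04-05"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read()[:200])  # XML listing the supported wire protocol versions
```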
Jan 15 12:49:25.361527 systemd-networkd[1331]: eth0: DHCP lease lost Jan 15 12:49:25.364745 waagent[1807]: 2025-01-15T12:49:25.362564Z INFO Daemon Daemon Create user account if not exists Jan 15 12:49:25.368704 waagent[1807]: 2025-01-15T12:49:25.368631Z INFO Daemon Daemon User core already exists, skip useradd Jan 15 12:49:25.369807 systemd-networkd[1331]: eth0: DHCPv6 lease lost Jan 15 12:49:25.375793 waagent[1807]: 2025-01-15T12:49:25.375689Z INFO Daemon Daemon Configure sudoer Jan 15 12:49:25.382034 waagent[1807]: 2025-01-15T12:49:25.381964Z INFO Daemon Daemon Configure sshd Jan 15 12:49:25.386782 waagent[1807]: 2025-01-15T12:49:25.386684Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 15 12:49:25.402751 waagent[1807]: 2025-01-15T12:49:25.400833Z INFO Daemon Daemon Deploy ssh public key. Jan 15 12:49:25.411770 systemd-networkd[1331]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 12:49:25.442689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:49:25.450140 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:49:25.497738 kubelet[1886]: E0115 12:49:25.496036 1886 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:49:25.499326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:49:25.499476 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:49:25.503153 waagent[1807]: 2025-01-15T12:49:25.503097Z INFO Daemon Daemon Provisioning complete Jan 15 12:49:25.522834 waagent[1807]: 2025-01-15T12:49:25.522782Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 15 12:49:25.529551 waagent[1807]: 2025-01-15T12:49:25.529488Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 15 12:49:25.539931 waagent[1807]: 2025-01-15T12:49:25.539871Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 15 12:49:25.673089 waagent[1897]: 2025-01-15T12:49:25.672406Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 15 12:49:25.673089 waagent[1897]: 2025-01-15T12:49:25.672552Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 15 12:49:25.673089 waagent[1897]: 2025-01-15T12:49:25.672606Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 15 12:49:25.706776 waagent[1897]: 2025-01-15T12:49:25.706677Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 15 12:49:25.707113 waagent[1897]: 2025-01-15T12:49:25.707072Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 12:49:25.707254 waagent[1897]: 2025-01-15T12:49:25.707221Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 12:49:25.715629 waagent[1897]: 2025-01-15T12:49:25.715563Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 15 12:49:25.721945 waagent[1897]: 2025-01-15T12:49:25.721895Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 15 12:49:25.722615 waagent[1897]: 2025-01-15T12:49:25.722575Z INFO ExtHandler Jan 15 12:49:25.723747 waagent[1897]: 2025-01-15T12:49:25.722765Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 65d6ebcd-1fc6-4092-862b-cca526b2fd06 eTag: 13348624351177755688 source: Fabric] Jan 15 12:49:25.723747 waagent[1897]: 2025-01-15T12:49:25.723085Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 15 12:49:25.723747 waagent[1897]: 2025-01-15T12:49:25.723627Z INFO ExtHandler Jan 15 12:49:25.723747 waagent[1897]: 2025-01-15T12:49:25.723696Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 15 12:49:25.727844 waagent[1897]: 2025-01-15T12:49:25.727809Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 15 12:49:25.802743 waagent[1897]: 2025-01-15T12:49:25.802626Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A875143AB72B7B1C6E7C171DE123C54D4F19062F', 'hasPrivateKey': False} Jan 15 12:49:25.803170 waagent[1897]: 2025-01-15T12:49:25.803123Z INFO ExtHandler Downloaded certificate {'thumbprint': '49FC92B084EAF845A6BAAF5FB87BB1C67ED62E72', 'hasPrivateKey': True} Jan 15 12:49:25.803643 waagent[1897]: 2025-01-15T12:49:25.803596Z INFO ExtHandler Fetch goal state completed Jan 15 12:49:25.820562 waagent[1897]: 2025-01-15T12:49:25.820502Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1897 Jan 15 12:49:25.820734 waagent[1897]: 2025-01-15T12:49:25.820688Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 15 12:49:25.822443 waagent[1897]: 2025-01-15T12:49:25.822396Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 15 12:49:25.822870 waagent[1897]: 2025-01-15T12:49:25.822831Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 15 12:49:25.850272 waagent[1897]: 2025-01-15T12:49:25.850224Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 15 12:49:25.850475 waagent[1897]: 2025-01-15T12:49:25.850433Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jan 15 12:49:25.856585 waagent[1897]: 2025-01-15T12:49:25.856542Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 15 12:49:25.863045 systemd[1]: Reloading requested from client PID 1912 ('systemctl') (unit waagent.service)... Jan 15 12:49:25.863224 systemd[1]: Reloading... Jan 15 12:49:25.953752 zram_generator::config[1949]: No configuration found. Jan 15 12:49:26.044375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:49:26.119831 systemd[1]: Reloading finished in 256 ms. Jan 15 12:49:26.141255 waagent[1897]: 2025-01-15T12:49:26.140891Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 15 12:49:26.146288 systemd[1]: Reloading requested from client PID 2000 ('systemctl') (unit waagent.service)... Jan 15 12:49:26.146308 systemd[1]: Reloading... Jan 15 12:49:26.217107 zram_generator::config[2031]: No configuration found. Jan 15 12:49:26.340046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:49:26.415881 systemd[1]: Reloading finished in 269 ms. Jan 15 12:49:26.438772 waagent[1897]: 2025-01-15T12:49:26.437979Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 15 12:49:26.438772 waagent[1897]: 2025-01-15T12:49:26.438157Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 15 12:49:26.898762 waagent[1897]: 2025-01-15T12:49:26.897884Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 15 12:49:26.898762 waagent[1897]: 2025-01-15T12:49:26.898511Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 15 12:49:26.899876 waagent[1897]: 2025-01-15T12:49:26.899814Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 15 12:49:26.900009 waagent[1897]: 2025-01-15T12:49:26.899948Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 12:49:26.900555 waagent[1897]: 2025-01-15T12:49:26.900507Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 12:49:26.900696 waagent[1897]: 2025-01-15T12:49:26.900636Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 15 12:49:26.900989 waagent[1897]: 2025-01-15T12:49:26.900906Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jan 15 12:49:26.901184 waagent[1897]: 2025-01-15T12:49:26.901144Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 12:49:26.901588 waagent[1897]: 2025-01-15T12:49:26.901539Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 15 12:49:26.901889 waagent[1897]: 2025-01-15T12:49:26.901843Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 15 12:49:26.901889 waagent[1897]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 15 12:49:26.901889 waagent[1897]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 15 12:49:26.901889 waagent[1897]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 15 12:49:26.901889 waagent[1897]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 15 12:49:26.901889 waagent[1897]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 15 12:49:26.901889 waagent[1897]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 15 12:49:26.902353 waagent[1897]: 2025-01-15T12:49:26.902313Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 12:49:26.902493 waagent[1897]: 2025-01-15T12:49:26.902452Z INFO EnvHandler ExtHandler Configure routes Jan 15 12:49:26.902553 waagent[1897]: 2025-01-15T12:49:26.902522Z INFO EnvHandler ExtHandler Gateway:None Jan 15 12:49:26.902601 waagent[1897]: 2025-01-15T12:49:26.902574Z INFO EnvHandler ExtHandler Routes:None Jan 15 12:49:26.902907 waagent[1897]: 2025-01-15T12:49:26.902235Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 15 12:49:26.903210 waagent[1897]: 2025-01-15T12:49:26.903146Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 15 12:49:26.903279 waagent[1897]: 2025-01-15T12:49:26.903213Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 15 12:49:26.904253 waagent[1897]: 2025-01-15T12:49:26.903709Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 15 12:49:26.909242 waagent[1897]: 2025-01-15T12:49:26.909175Z INFO ExtHandler ExtHandler Jan 15 12:49:26.909347 waagent[1897]: 2025-01-15T12:49:26.909306Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3aad84a7-78a3-4c23-92b1-974961d669e7 correlation c73a6c80-5c3e-4d6a-bbd0-4ccb9bb3ae55 created: 2025-01-15T12:47:55.311234Z] Jan 15 12:49:26.910393 waagent[1897]: 2025-01-15T12:49:26.910327Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
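The routing table MonitorHandler prints above is read straight from /proc/net/route, where the destination and gateway columns are little-endian hex. Decoding them ties the entries back to addresses seen elsewhere in this log:

```python
import socket
import struct

def decode(hexaddr: str) -> str:
    """Convert a little-endian hex address from /proc/net/route to dotted quad."""
    return socket.inet_ntoa(struct.pack("<L", int(hexaddr, 16)))

print(decode("0114C80A"))  # 10.200.20.1     - the default gateway
print(decode("10813FA8"))  # 168.63.129.16   - the Azure WireServer route
print(decode("FEA9FEA9"))  # 169.254.169.254 - the IMDS route
```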
Jan 15 12:49:26.912175 waagent[1897]: 2025-01-15T12:49:26.912115Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 15 12:49:26.947569 waagent[1897]: 2025-01-15T12:49:26.947495Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 7D01E9C4-16C7-423D-9720-9DAC67A32AD5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 15 12:49:26.956239 waagent[1897]: 2025-01-15T12:49:26.956157Z INFO MonitorHandler ExtHandler Network interfaces: Jan 15 12:49:26.956239 waagent[1897]: Executing ['ip', '-a', '-o', 'link']: Jan 15 12:49:26.956239 waagent[1897]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 15 12:49:26.956239 waagent[1897]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bd:4f:ac brd ff:ff:ff:ff:ff:ff Jan 15 12:49:26.956239 waagent[1897]: 3: enP33283s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bd:4f:ac brd ff:ff:ff:ff:ff:ff\ altname enP33283p0s2 Jan 15 12:49:26.956239 waagent[1897]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 15 12:49:26.956239 waagent[1897]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 15 12:49:26.956239 waagent[1897]: 2: eth0 inet 10.200.20.39/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 15 12:49:26.956239 waagent[1897]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 15 12:49:26.956239 waagent[1897]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 15 12:49:26.956239 waagent[1897]: 2: eth0 inet6 fe80::222:48ff:febd:4fac/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 15 12:49:26.956239 waagent[1897]: 3: enP33283s1 inet6 fe80::222:48ff:febd:4fac/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 15 12:49:27.044116 waagent[1897]: 2025-01-15T12:49:27.044029Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 15 12:49:27.044116 waagent[1897]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:49:27.044116 waagent[1897]: pkts bytes target prot opt in out source destination Jan 15 12:49:27.044116 waagent[1897]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:49:27.044116 waagent[1897]: pkts bytes target prot opt in out source destination Jan 15 12:49:27.044116 waagent[1897]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:49:27.044116 waagent[1897]: pkts bytes target prot opt in out source destination Jan 15 12:49:27.044116 waagent[1897]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 15 12:49:27.044116 waagent[1897]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 15 12:49:27.044116 waagent[1897]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 15 12:49:27.047333 waagent[1897]: 2025-01-15T12:49:27.047265Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 15 12:49:27.047333 waagent[1897]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:49:27.047333 waagent[1897]: pkts bytes target prot opt in out source destination Jan 15 12:49:27.047333 waagent[1897]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:49:27.047333 waagent[1897]: pkts bytes target prot opt in out source destination Jan 15 12:49:27.047333 waagent[1897]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:49:27.047333 waagent[1897]: pkts bytes target prot opt in out source destination Jan 15 12:49:27.047333 waagent[1897]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 15 12:49:27.047333 waagent[1897]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 15 12:49:27.047333 waagent[1897]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 15 12:49:27.047578 waagent[1897]: 2025-01-15T12:49:27.047538Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 15 12:49:32.352286 chronyd[1697]: Selected source PHC0 Jan 15 12:49:35.656069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 12:49:35.663897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:49:35.804819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:49:35.808865 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:49:35.848543 kubelet[2130]: E0115 12:49:35.848489 2130 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:49:35.850730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:49:35.850860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:49:45.906281 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 15 12:49:45.914025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:49:46.201847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
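The three OUTPUT rules in the table above are waagent's standard WireServer fencing: permit DNS (tcp/53) to 168.63.129.16, permit traffic owned by UID 0 so the agent itself can still reach the platform, and drop any other new connection to that address. Reconstructed as iptables invocations from the counters table (a hedged sketch, not the agent's own implementation):

```python
import subprocess

WIRESERVER = "168.63.129.16"
rules = [
    # tcp dpt:53 -> ACCEPT
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    # owner UID match 0 -> ACCEPT (root-owned sockets, i.e. the agent)
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # ctstate INVALID,NEW -> DROP (everything else headed to the WireServer)
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(["iptables", "-w", "-A", "OUTPUT", *rule], check=True)
```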
Jan 15 12:49:46.205793 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:49:46.244856 kubelet[2146]: E0115 12:49:46.244789 2146 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:49:46.247538 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:49:46.247831 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:49:51.266557 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 15 12:49:54.602639 update_engine[1668]: I20250115 12:49:54.602548 1668 update_attempter.cc:509] Updating boot flags... Jan 15 12:49:54.664738 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2166) Jan 15 12:49:54.775284 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2168) Jan 15 12:49:56.406116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 15 12:49:56.414899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:49:56.702438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:49:56.712972 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:49:56.753461 kubelet[2228]: E0115 12:49:56.753383 2228 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:49:56.756106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:49:56.756364 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:50:02.422814 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 12:50:02.423984 systemd[1]: Started sshd@0-10.200.20.39:22-10.200.16.10:44042.service - OpenSSH per-connection server daemon (10.200.16.10:44042). Jan 15 12:50:02.911139 sshd[2237]: Accepted publickey for core from 10.200.16.10 port 44042 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:02.912473 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:02.916237 systemd-logind[1665]: New session 3 of user core. Jan 15 12:50:02.924038 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 15 12:50:03.300382 systemd[1]: Started sshd@1-10.200.20.39:22-10.200.16.10:44044.service - OpenSSH per-connection server daemon (10.200.16.10:44044). Jan 15 12:50:03.714009 sshd[2242]: Accepted publickey for core from 10.200.16.10 port 44044 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:03.715373 sshd[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:03.719195 systemd-logind[1665]: New session 4 of user core. Jan 15 12:50:03.728888 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 15 12:50:04.035962 sshd[2242]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:04.038563 systemd[1]: sshd@1-10.200.20.39:22-10.200.16.10:44044.service: Deactivated successfully. Jan 15 12:50:04.040209 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 12:50:04.042598 systemd-logind[1665]: Session 4 logged out. Waiting for processes to exit. Jan 15 12:50:04.043858 systemd-logind[1665]: Removed session 4. Jan 15 12:50:04.122964 systemd[1]: Started sshd@2-10.200.20.39:22-10.200.16.10:44056.service - OpenSSH per-connection server daemon (10.200.16.10:44056). Jan 15 12:50:04.571645 sshd[2249]: Accepted publickey for core from 10.200.16.10 port 44056 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:04.573019 sshd[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:04.576831 systemd-logind[1665]: New session 5 of user core. Jan 15 12:50:04.583902 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 12:50:04.891320 sshd[2249]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:04.894409 systemd-logind[1665]: Session 5 logged out. Waiting for processes to exit. Jan 15 12:50:04.894553 systemd[1]: sshd@2-10.200.20.39:22-10.200.16.10:44056.service: Deactivated successfully. Jan 15 12:50:04.896519 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 12:50:04.898302 systemd-logind[1665]: Removed session 5. Jan 15 12:50:04.972613 systemd[1]: Started sshd@3-10.200.20.39:22-10.200.16.10:44060.service - OpenSSH per-connection server daemon (10.200.16.10:44060). Jan 15 12:50:05.421001 sshd[2256]: Accepted publickey for core from 10.200.16.10 port 44060 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:05.422360 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:05.426184 systemd-logind[1665]: New session 6 of user core. Jan 15 12:50:05.437093 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 12:50:05.761431 sshd[2256]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:05.765202 systemd[1]: sshd@3-10.200.20.39:22-10.200.16.10:44060.service: Deactivated successfully. Jan 15 12:50:05.767824 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 12:50:05.768643 systemd-logind[1665]: Session 6 logged out. Waiting for processes to exit. Jan 15 12:50:05.769616 systemd-logind[1665]: Removed session 6. Jan 15 12:50:05.853757 systemd[1]: Started sshd@4-10.200.20.39:22-10.200.16.10:60338.service - OpenSSH per-connection server daemon (10.200.16.10:60338). Jan 15 12:50:06.271843 sshd[2263]: Accepted publickey for core from 10.200.16.10 port 60338 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:06.273127 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:06.276873 systemd-logind[1665]: New session 7 of user core. Jan 15 12:50:06.283889 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 12:50:06.660817 sudo[2266]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 12:50:06.661098 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:50:06.675486 sudo[2266]: pam_unix(sudo:session): session closed for user root Jan 15 12:50:06.758217 sshd[2263]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:06.761132 systemd[1]: sshd@4-10.200.20.39:22-10.200.16.10:60338.service: Deactivated successfully. 
Jan 15 12:50:06.762895 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 12:50:06.763838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 15 12:50:06.765926 systemd-logind[1665]: Session 7 logged out. Waiting for processes to exit. Jan 15 12:50:06.769925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:06.770903 systemd-logind[1665]: Removed session 7. Jan 15 12:50:06.833993 systemd[1]: Started sshd@5-10.200.20.39:22-10.200.16.10:60344.service - OpenSSH per-connection server daemon (10.200.16.10:60344). Jan 15 12:50:07.159918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:07.164265 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:50:07.202773 kubelet[2281]: E0115 12:50:07.202684 2281 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:50:07.205386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:50:07.205654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:50:07.252085 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 60344 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:07.253079 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:07.257923 systemd-logind[1665]: New session 8 of user core. Jan 15 12:50:07.265954 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 15 12:50:07.490044 sudo[2291]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 12:50:07.490322 sudo[2291]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:50:07.493426 sudo[2291]: pam_unix(sudo:session): session closed for user root Jan 15 12:50:07.497983 sudo[2290]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 15 12:50:07.498235 sudo[2290]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:50:07.508960 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 15 12:50:07.512448 auditctl[2294]: No rules Jan 15 12:50:07.512772 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 12:50:07.512963 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 15 12:50:07.515514 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 15 12:50:07.547702 augenrules[2312]: No rules Jan 15 12:50:07.548291 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 15 12:50:07.550852 sudo[2290]: pam_unix(sudo:session): session closed for user root Jan 15 12:50:07.624288 sshd[2274]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:07.627035 systemd[1]: sshd@5-10.200.20.39:22-10.200.16.10:60344.service: Deactivated successfully. Jan 15 12:50:07.628669 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 12:50:07.630099 systemd-logind[1665]: Session 8 logged out. Waiting for processes to exit. Jan 15 12:50:07.631369 systemd-logind[1665]: Removed session 8. 
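Note: the audit sequence above is internally consistent. The sudo'd rm deletes the only two rule files under /etc/audit/rules.d/, so when audit-rules.service restarts, auditctl flushes the loaded rules and augenrules — which concatenates /etc/audit/rules.d/*.rules and loads the result — finds nothing; hence "No rules" from both. For reference, a rules.d fragment has this shape (illustrative only; the deleted 80-selinux.rules and 99-default.rules are not reproduced in the log):

    # Hypothetical /etc/audit/rules.d/99-example.rules
    -D                                        # flush currently loaded rules
    -b 8192                                   # kernel audit backlog buffer
    -w /etc/kubernetes/ -p wa -k k8s-config   # watch writes/attr changes under /etc/kubernetes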
Jan 15 12:50:07.701227 systemd[1]: Started sshd@6-10.200.20.39:22-10.200.16.10:60350.service - OpenSSH per-connection server daemon (10.200.16.10:60350). Jan 15 12:50:08.119982 sshd[2320]: Accepted publickey for core from 10.200.16.10 port 60350 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:08.121386 sshd[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:08.126213 systemd-logind[1665]: New session 9 of user core. Jan 15 12:50:08.136977 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 15 12:50:08.361621 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 12:50:08.362245 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:50:09.213968 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 15 12:50:09.215124 (dockerd)[2339]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 12:50:09.812568 dockerd[2339]: time="2025-01-15T12:50:09.812104060Z" level=info msg="Starting up" Jan 15 12:50:10.195868 dockerd[2339]: time="2025-01-15T12:50:10.195827718Z" level=info msg="Loading containers: start." Jan 15 12:50:10.313508 kernel: Initializing XFRM netlink socket Jan 15 12:50:10.443578 systemd-networkd[1331]: docker0: Link UP Jan 15 12:50:10.461897 dockerd[2339]: time="2025-01-15T12:50:10.461794171Z" level=info msg="Loading containers: done." Jan 15 12:50:10.485361 dockerd[2339]: time="2025-01-15T12:50:10.485239284Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 12:50:10.485361 dockerd[2339]: time="2025-01-15T12:50:10.485354284Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 15 12:50:10.485546 dockerd[2339]: time="2025-01-15T12:50:10.485463685Z" level=info msg="Daemon has completed initialization" Jan 15 12:50:10.532503 dockerd[2339]: time="2025-01-15T12:50:10.532136110Z" level=info msg="API listen on /run/docker.sock" Jan 15 12:50:10.532425 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 12:50:12.043423 containerd[1702]: time="2025-01-15T12:50:12.043373189Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 15 12:50:12.949399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818878568.mount: Deactivated successfully. 
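Note: the overlay2 warning above is a performance hint, not an error. With CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, Docker disables its native-diff fast path for overlay2 and computes layer diffs by walking the filesystem, which mainly slows image builds. Pinning the driver explicitly rather than relying on auto-detection would look like this (hypothetical /etc/docker/daemon.json; none appears in this log):

    {
      "storage-driver": "overlay2"
    }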
Jan 15 12:50:14.315890 containerd[1702]: time="2025-01-15T12:50:14.315835143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:14.318489 containerd[1702]: time="2025-01-15T12:50:14.318452946Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864010" Jan 15 12:50:14.323943 containerd[1702]: time="2025-01-15T12:50:14.323881392Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:14.332841 containerd[1702]: time="2025-01-15T12:50:14.332775522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:14.334203 containerd[1702]: time="2025-01-15T12:50:14.333976963Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 2.290560614s" Jan 15 12:50:14.334203 containerd[1702]: time="2025-01-15T12:50:14.334014244Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Jan 15 12:50:14.353974 containerd[1702]: time="2025-01-15T12:50:14.353932186Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 15 12:50:16.086343 containerd[1702]: time="2025-01-15T12:50:16.086284261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:16.089408 containerd[1702]: time="2025-01-15T12:50:16.089365225Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900694" Jan 15 12:50:16.092761 containerd[1702]: time="2025-01-15T12:50:16.092700148Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:16.098244 containerd[1702]: time="2025-01-15T12:50:16.098189915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:16.099381 containerd[1702]: time="2025-01-15T12:50:16.099261276Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 1.74528753s" Jan 15 12:50:16.099381 containerd[1702]: time="2025-01-15T12:50:16.099296396Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Jan 15 12:50:16.118511 
containerd[1702]: time="2025-01-15T12:50:16.118467058Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 15 12:50:17.284200 containerd[1702]: time="2025-01-15T12:50:17.284155320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:17.291594 containerd[1702]: time="2025-01-15T12:50:17.291549569Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164332" Jan 15 12:50:17.297432 containerd[1702]: time="2025-01-15T12:50:17.297385056Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:17.304564 containerd[1702]: time="2025-01-15T12:50:17.304427304Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.185837166s" Jan 15 12:50:17.304564 containerd[1702]: time="2025-01-15T12:50:17.304473784Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Jan 15 12:50:17.307203 containerd[1702]: time="2025-01-15T12:50:17.306153626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:17.326415 containerd[1702]: time="2025-01-15T12:50:17.326375329Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 15 12:50:17.406042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 15 12:50:17.414921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:17.518829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:17.523157 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:50:17.563206 kubelet[2562]: E0115 12:50:17.563080 2562 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:50:17.565953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:50:17.566225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:50:18.849662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928301920.mount: Deactivated successfully. 
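Note: the restart cadence is visible in the timestamps: the kubelet is relaunched roughly every ten seconds (12:49:46, 12:49:56, 12:50:07, 12:50:17) and the counter keeps climbing without systemd ever giving up. That behavior is consistent with unit settings along these lines (assumed values; the real kubelet.service contents are not shown in the log):

    [Unit]
    StartLimitIntervalSec=0   # never rate-limit restarts, matching the ever-growing counter
    [Service]
    Restart=always
    RestartSec=10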
Jan 15 12:50:19.657011 containerd[1702]: time="2025-01-15T12:50:19.656952732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:19.659808 containerd[1702]: time="2025-01-15T12:50:19.659647456Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662011" Jan 15 12:50:19.667211 containerd[1702]: time="2025-01-15T12:50:19.667155904Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:19.674465 containerd[1702]: time="2025-01-15T12:50:19.674406792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:19.675438 containerd[1702]: time="2025-01-15T12:50:19.674986433Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 2.348392224s" Jan 15 12:50:19.675438 containerd[1702]: time="2025-01-15T12:50:19.675023593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Jan 15 12:50:19.693773 containerd[1702]: time="2025-01-15T12:50:19.693730655Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 15 12:50:20.318141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689574762.mount: Deactivated successfully. 
Jan 15 12:50:21.225550 containerd[1702]: time="2025-01-15T12:50:21.225494442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:21.229090 containerd[1702]: time="2025-01-15T12:50:21.228845246Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 15 12:50:21.232066 containerd[1702]: time="2025-01-15T12:50:21.232036490Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:21.237796 containerd[1702]: time="2025-01-15T12:50:21.237759217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:21.239163 containerd[1702]: time="2025-01-15T12:50:21.238825458Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.545051443s" Jan 15 12:50:21.239163 containerd[1702]: time="2025-01-15T12:50:21.238858898Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 15 12:50:21.257866 containerd[1702]: time="2025-01-15T12:50:21.257779401Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 15 12:50:21.927483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350655707.mount: Deactivated successfully. 
Jan 15 12:50:21.948663 containerd[1702]: time="2025-01-15T12:50:21.948608337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:21.951091 containerd[1702]: time="2025-01-15T12:50:21.950840619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 15 12:50:21.956409 containerd[1702]: time="2025-01-15T12:50:21.956361426Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:21.960840 containerd[1702]: time="2025-01-15T12:50:21.960780391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:21.961948 containerd[1702]: time="2025-01-15T12:50:21.961455912Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 703.604511ms" Jan 15 12:50:21.961948 containerd[1702]: time="2025-01-15T12:50:21.961490232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 15 12:50:21.980435 containerd[1702]: time="2025-01-15T12:50:21.980308974Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 15 12:50:22.567925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939261316.mount: Deactivated successfully. Jan 15 12:50:26.303762 containerd[1702]: time="2025-01-15T12:50:26.303510281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:26.306791 containerd[1702]: time="2025-01-15T12:50:26.306750325Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jan 15 12:50:26.310401 containerd[1702]: time="2025-01-15T12:50:26.310371169Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:26.316602 containerd[1702]: time="2025-01-15T12:50:26.316544256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:26.317888 containerd[1702]: time="2025-01-15T12:50:26.317740338Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.337375324s" Jan 15 12:50:26.317888 containerd[1702]: time="2025-01-15T12:50:26.317780618Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 15 12:50:27.656075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
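A back-of-envelope check on the pull timings above: the large etcd image is bandwidth-bound, while the tiny pause image is latency-bound, which is why its apparent throughput is so much lower:

    etcd:  66,189,079 B / 4.337 s ≈ 15.3 MB/s ≈ 122 Mbit/s
    pause:    268,051 B / 0.704 s ≈  0.38 MB/s   (dominated by per-request latency)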
Jan 15 12:50:27.668173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:27.804922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:27.808922 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:50:27.861556 kubelet[2751]: E0115 12:50:27.861514 2751 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:50:27.865085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:50:27.865235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:50:30.412939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:30.422032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:30.437022 systemd[1]: Reloading requested from client PID 2767 ('systemctl') (unit session-9.scope)... Jan 15 12:50:30.437035 systemd[1]: Reloading... Jan 15 12:50:30.551750 zram_generator::config[2816]: No configuration found. Jan 15 12:50:30.648071 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:50:30.722837 systemd[1]: Reloading finished in 285 ms. Jan 15 12:50:30.773122 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 12:50:30.773204 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 12:50:30.773465 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:30.775038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:30.914764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:30.923007 (kubelet)[2874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 12:50:30.961037 kubelet[2874]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 12:50:30.961037 kubelet[2874]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 12:50:30.961037 kubelet[2874]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
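Note: two of the deprecation warnings above point at KubeletConfiguration fields that should replace the flags; the third, --pod-infra-container-image, is simply going away because, as the message says, the image garbage collector now learns the sandbox image from the CRI. A sketch of the corresponding config fields (field names from kubelet.config.k8s.io/v1beta1; the values are placeholders, not this node's actual settings):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint:
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    # replaces --volume-plugin-dir (the same directory is recreated below):
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    # static pod source, matching "Adding static pod path" below:
    staticPodPath: /etc/kubernetes/manifests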
Jan 15 12:50:30.961386 kubelet[2874]: I0115 12:50:30.961030 2874 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 12:50:32.096958 kubelet[2874]: I0115 12:50:32.096917 2874 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 15 12:50:32.096958 kubelet[2874]: I0115 12:50:32.096948 2874 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 12:50:32.097353 kubelet[2874]: I0115 12:50:32.097148 2874 server.go:927] "Client rotation is on, will bootstrap in background" Jan 15 12:50:32.108986 kubelet[2874]: E0115 12:50:32.108946 2874 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:32.110219 kubelet[2874]: I0115 12:50:32.110117 2874 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 12:50:32.123222 kubelet[2874]: I0115 12:50:32.123188 2874 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 15 12:50:32.124333 kubelet[2874]: I0115 12:50:32.124284 2874 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 12:50:32.124523 kubelet[2874]: I0115 12:50:32.124336 2874 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-f89ceb891c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 15 12:50:32.124610 kubelet[2874]: I0115 12:50:32.124534 2874 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 12:50:32.124610 kubelet[2874]: I0115 12:50:32.124544 2874 container_manager_linux.go:301] "Creating device plugin manager" Jan 15 12:50:32.124701 kubelet[2874]: I0115 12:50:32.124679 2874 state_mem.go:36] "Initialized new in-memory 
state store" Jan 15 12:50:32.125473 kubelet[2874]: I0115 12:50:32.125450 2874 kubelet.go:400] "Attempting to sync node with API server" Jan 15 12:50:32.125516 kubelet[2874]: I0115 12:50:32.125477 2874 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 12:50:32.125516 kubelet[2874]: I0115 12:50:32.125506 2874 kubelet.go:312] "Adding apiserver pod source" Jan 15 12:50:32.125562 kubelet[2874]: I0115 12:50:32.125522 2874 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 12:50:32.126706 kubelet[2874]: W0115 12:50:32.126545 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-f89ceb891c&limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:32.126706 kubelet[2874]: E0115 12:50:32.126605 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-f89ceb891c&limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:32.126706 kubelet[2874]: W0115 12:50:32.126664 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:32.126706 kubelet[2874]: E0115 12:50:32.126690 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:32.128443 kubelet[2874]: I0115 12:50:32.127253 2874 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 12:50:32.128443 kubelet[2874]: I0115 12:50:32.127428 2874 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 12:50:32.128443 kubelet[2874]: W0115 12:50:32.127471 2874 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 15 12:50:32.128443 kubelet[2874]: I0115 12:50:32.128284 2874 server.go:1264] "Started kubelet" Jan 15 12:50:32.130065 kubelet[2874]: I0115 12:50:32.130043 2874 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 12:50:32.135969 kubelet[2874]: I0115 12:50:32.135859 2874 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 12:50:32.136288 kubelet[2874]: E0115 12:50:32.136270 2874 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 12:50:32.136547 kubelet[2874]: I0115 12:50:32.136497 2874 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 12:50:32.136901 kubelet[2874]: I0115 12:50:32.136882 2874 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 12:50:32.137095 kubelet[2874]: I0115 12:50:32.137064 2874 server.go:455] "Adding debug handlers to kubelet server" Jan 15 12:50:32.137918 kubelet[2874]: I0115 12:50:32.137893 2874 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 15 12:50:32.141015 kubelet[2874]: E0115 12:50:32.140879 2874 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-f89ceb891c.181adeaaf43e069c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-f89ceb891c,UID:ci-4081.3.0-a-f89ceb891c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-f89ceb891c,},FirstTimestamp:2025-01-15 12:50:32.12825974 +0000 UTC m=+1.202258667,LastTimestamp:2025-01-15 12:50:32.12825974 +0000 UTC m=+1.202258667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-f89ceb891c,}" Jan 15 12:50:32.141264 kubelet[2874]: E0115 12:50:32.141234 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-f89ceb891c?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="200ms" Jan 15 12:50:32.141528 kubelet[2874]: I0115 12:50:32.141509 2874 factory.go:221] Registration of the systemd container factory successfully Jan 15 12:50:32.141666 kubelet[2874]: I0115 12:50:32.141649 2874 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 12:50:32.141967 kubelet[2874]: I0115 12:50:32.141776 2874 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 15 12:50:32.143357 kubelet[2874]: I0115 12:50:32.143321 2874 reconciler.go:26] "Reconciler: start to sync state" Jan 15 12:50:32.143754 kubelet[2874]: W0115 12:50:32.143683 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:32.143822 kubelet[2874]: E0115 12:50:32.143779 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:32.144006 kubelet[2874]: I0115 12:50:32.143976 2874 factory.go:221] Registration of the containerd container factory successfully Jan 15 12:50:32.199626 kubelet[2874]: I0115 12:50:32.199576 2874 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 15 12:50:32.201865 kubelet[2874]: I0115 12:50:32.201842 2874 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 12:50:32.202010 kubelet[2874]: I0115 12:50:32.201999 2874 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 12:50:32.202340 kubelet[2874]: I0115 12:50:32.202328 2874 kubelet.go:2337] "Starting kubelet main sync loop" Jan 15 12:50:32.202503 kubelet[2874]: E0115 12:50:32.202461 2874 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 12:50:32.205066 kubelet[2874]: W0115 12:50:32.205021 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:32.205066 kubelet[2874]: E0115 12:50:32.205067 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:32.206566 kubelet[2874]: I0115 12:50:32.206535 2874 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 12:50:32.206566 kubelet[2874]: I0115 12:50:32.206557 2874 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 12:50:32.206682 kubelet[2874]: I0115 12:50:32.206575 2874 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:50:32.210704 kubelet[2874]: I0115 12:50:32.210672 2874 policy_none.go:49] "None policy: Start" Jan 15 12:50:32.211611 kubelet[2874]: I0115 12:50:32.211552 2874 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 12:50:32.211611 kubelet[2874]: I0115 12:50:32.211584 2874 state_mem.go:35] "Initializing new in-memory state store" Jan 15 12:50:32.220361 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 15 12:50:32.231907 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 12:50:32.235183 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 15 12:50:32.240016 kubelet[2874]: I0115 12:50:32.239980 2874 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.241747 kubelet[2874]: E0115 12:50:32.240680 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.241747 kubelet[2874]: I0115 12:50:32.240999 2874 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 12:50:32.241747 kubelet[2874]: I0115 12:50:32.241194 2874 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 12:50:32.241747 kubelet[2874]: I0115 12:50:32.241306 2874 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 12:50:32.243603 kubelet[2874]: E0115 12:50:32.243567 2874 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-f89ceb891c\" not found" Jan 15 12:50:32.305105 kubelet[2874]: I0115 12:50:32.305045 2874 topology_manager.go:215] "Topology Admit Handler" podUID="d23128fe1a23cbbba9284158943d9311" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.306924 kubelet[2874]: I0115 12:50:32.306887 2874 topology_manager.go:215] "Topology Admit Handler" podUID="d2dd0df29afca6e016a2e4d4e0b30b54" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.308633 kubelet[2874]: I0115 12:50:32.308466 2874 topology_manager.go:215] "Topology Admit Handler" podUID="5f1e702d3783eaea0ea581588038c2fd" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.315325 systemd[1]: Created slice kubepods-burstable-podd23128fe1a23cbbba9284158943d9311.slice - libcontainer container kubepods-burstable-podd23128fe1a23cbbba9284158943d9311.slice. Jan 15 12:50:32.330333 systemd[1]: Created slice kubepods-burstable-podd2dd0df29afca6e016a2e4d4e0b30b54.slice - libcontainer container kubepods-burstable-podd2dd0df29afca6e016a2e4d4e0b30b54.slice. Jan 15 12:50:32.337397 systemd[1]: Created slice kubepods-burstable-pod5f1e702d3783eaea0ea581588038c2fd.slice - libcontainer container kubepods-burstable-pod5f1e702d3783eaea0ea581588038c2fd.slice. 
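Note: the three "Topology Admit Handler" entries are the control-plane static pods the kubelet read from its static pod path (/etc/kubernetes/manifests, per "Adding static pod path" above); each immediately gets a burstable per-pod slice carrying its UID. A skeleton of such a manifest, for orientation only (the node's real kube-apiserver manifest is not in the log and carries many more flags):

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.30.8
        command: ["kube-apiserver"]   # flags elided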
Jan 15 12:50:32.342968 kubelet[2874]: E0115 12:50:32.342924 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-f89ceb891c?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="400ms" Jan 15 12:50:32.343927 kubelet[2874]: I0115 12:50:32.343894 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d23128fe1a23cbbba9284158943d9311-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-f89ceb891c\" (UID: \"d23128fe1a23cbbba9284158943d9311\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.343972 kubelet[2874]: I0115 12:50:32.343935 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d23128fe1a23cbbba9284158943d9311-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-f89ceb891c\" (UID: \"d23128fe1a23cbbba9284158943d9311\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.343972 kubelet[2874]: I0115 12:50:32.343955 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2dd0df29afca6e016a2e4d4e0b30b54-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-f89ceb891c\" (UID: \"d2dd0df29afca6e016a2e4d4e0b30b54\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.344026 kubelet[2874]: I0115 12:50:32.343974 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d2dd0df29afca6e016a2e4d4e0b30b54-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-f89ceb891c\" (UID: \"d2dd0df29afca6e016a2e4d4e0b30b54\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.344026 kubelet[2874]: I0115 12:50:32.343990 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f1e702d3783eaea0ea581588038c2fd-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-f89ceb891c\" (UID: \"5f1e702d3783eaea0ea581588038c2fd\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.344026 kubelet[2874]: I0115 12:50:32.344005 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2dd0df29afca6e016a2e4d4e0b30b54-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-f89ceb891c\" (UID: \"d2dd0df29afca6e016a2e4d4e0b30b54\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.344026 kubelet[2874]: I0115 12:50:32.344024 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2dd0df29afca6e016a2e4d4e0b30b54-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-f89ceb891c\" (UID: \"d2dd0df29afca6e016a2e4d4e0b30b54\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.344110 kubelet[2874]: I0115 12:50:32.344041 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d2dd0df29afca6e016a2e4d4e0b30b54-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-f89ceb891c\" (UID: \"d2dd0df29afca6e016a2e4d4e0b30b54\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.344110 kubelet[2874]: I0115 12:50:32.344060 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d23128fe1a23cbbba9284158943d9311-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-f89ceb891c\" (UID: \"d23128fe1a23cbbba9284158943d9311\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.442784 kubelet[2874]: I0115 12:50:32.442696 2874 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.443515 kubelet[2874]: E0115 12:50:32.443481 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.628403 containerd[1702]: time="2025-01-15T12:50:32.628297013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-f89ceb891c,Uid:d23128fe1a23cbbba9284158943d9311,Namespace:kube-system,Attempt:0,}" Jan 15 12:50:32.634385 containerd[1702]: time="2025-01-15T12:50:32.634067780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-f89ceb891c,Uid:d2dd0df29afca6e016a2e4d4e0b30b54,Namespace:kube-system,Attempt:0,}" Jan 15 12:50:32.640921 containerd[1702]: time="2025-01-15T12:50:32.640777588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-f89ceb891c,Uid:5f1e702d3783eaea0ea581588038c2fd,Namespace:kube-system,Attempt:0,}" Jan 15 12:50:32.752008 kubelet[2874]: E0115 12:50:32.743970 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-f89ceb891c?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="800ms" Jan 15 12:50:32.846308 kubelet[2874]: I0115 12:50:32.846251 2874 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.846628 kubelet[2874]: E0115 12:50:32.846596 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:32.987633 kubelet[2874]: W0115 12:50:32.987570 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:32.987633 kubelet[2874]: E0115 12:50:32.987636 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:33.271569 kubelet[2874]: W0115 12:50:33.271500 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: 
connection refused Jan 15 12:50:33.271569 kubelet[2874]: E0115 12:50:33.271570 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:33.308811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866622772.mount: Deactivated successfully. Jan 15 12:50:33.344657 containerd[1702]: time="2025-01-15T12:50:33.344241103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:50:33.350195 containerd[1702]: time="2025-01-15T12:50:33.350151430Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 15 12:50:33.353240 containerd[1702]: time="2025-01-15T12:50:33.353194913Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:50:33.356827 containerd[1702]: time="2025-01-15T12:50:33.356039077Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:50:33.362967 containerd[1702]: time="2025-01-15T12:50:33.362911325Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 12:50:33.366269 containerd[1702]: time="2025-01-15T12:50:33.366223689Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:50:33.368434 containerd[1702]: time="2025-01-15T12:50:33.368367091Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 12:50:33.372842 containerd[1702]: time="2025-01-15T12:50:33.372806256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:50:33.373976 containerd[1702]: time="2025-01-15T12:50:33.373525017Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 739.346957ms" Jan 15 12:50:33.375230 containerd[1702]: time="2025-01-15T12:50:33.375197099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 746.820246ms" Jan 15 12:50:33.378190 containerd[1702]: time="2025-01-15T12:50:33.378154703Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", 
repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 737.299115ms" Jan 15 12:50:33.412395 kubelet[2874]: W0115 12:50:33.412302 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-f89ceb891c&limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:33.412395 kubelet[2874]: E0115 12:50:33.412373 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-f89ceb891c&limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:33.545348 kubelet[2874]: E0115 12:50:33.545219 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-f89ceb891c?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="1.6s" Jan 15 12:50:33.648614 kubelet[2874]: I0115 12:50:33.648579 2874 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:33.648948 kubelet[2874]: E0115 12:50:33.648915 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:33.746133 kubelet[2874]: W0115 12:50:33.746068 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:33.746133 kubelet[2874]: E0115 12:50:33.746114 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:34.226010 containerd[1702]: time="2025-01-15T12:50:34.225708508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:50:34.226010 containerd[1702]: time="2025-01-15T12:50:34.225815388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:50:34.226010 containerd[1702]: time="2025-01-15T12:50:34.225830468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:34.227711 containerd[1702]: time="2025-01-15T12:50:34.227598070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:34.232275 containerd[1702]: time="2025-01-15T12:50:34.232082116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:50:34.232275 containerd[1702]: time="2025-01-15T12:50:34.232132836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:50:34.232275 containerd[1702]: time="2025-01-15T12:50:34.232159596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:34.232275 containerd[1702]: time="2025-01-15T12:50:34.232249516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:34.236814 containerd[1702]: time="2025-01-15T12:50:34.236415641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:50:34.236814 containerd[1702]: time="2025-01-15T12:50:34.236467321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:50:34.236814 containerd[1702]: time="2025-01-15T12:50:34.236478761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:34.236814 containerd[1702]: time="2025-01-15T12:50:34.236551881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:34.248419 kubelet[2874]: E0115 12:50:34.248378 2874 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.39:6443: connect: connection refused Jan 15 12:50:34.267945 systemd[1]: Started cri-containerd-89de9341fa65219559598fe6a8498dbdb1702c515e69345f27236043f71aa6f4.scope - libcontainer container 89de9341fa65219559598fe6a8498dbdb1702c515e69345f27236043f71aa6f4. Jan 15 12:50:34.273265 systemd[1]: Started cri-containerd-177ceee89d8f182f3b6fc3fa8a6790f0343b06eb964c22d6eac563aeffa274ca.scope - libcontainer container 177ceee89d8f182f3b6fc3fa8a6790f0343b06eb964c22d6eac563aeffa274ca. Jan 15 12:50:34.275906 systemd[1]: Started cri-containerd-8b3c508f484810aa0d0ffc7366609c1040f2ac3a2112d0d03256cd8406d7771c.scope - libcontainer container 8b3c508f484810aa0d0ffc7366609c1040f2ac3a2112d0d03256cd8406d7771c. 
Jan 15 12:50:34.321461 containerd[1702]: time="2025-01-15T12:50:34.320710661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-f89ceb891c,Uid:d23128fe1a23cbbba9284158943d9311,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b3c508f484810aa0d0ffc7366609c1040f2ac3a2112d0d03256cd8406d7771c\"" Jan 15 12:50:34.330095 containerd[1702]: time="2025-01-15T12:50:34.330047112Z" level=info msg="CreateContainer within sandbox \"8b3c508f484810aa0d0ffc7366609c1040f2ac3a2112d0d03256cd8406d7771c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 12:50:34.334049 containerd[1702]: time="2025-01-15T12:50:34.334011917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-f89ceb891c,Uid:d2dd0df29afca6e016a2e4d4e0b30b54,Namespace:kube-system,Attempt:0,} returns sandbox id \"177ceee89d8f182f3b6fc3fa8a6790f0343b06eb964c22d6eac563aeffa274ca\"" Jan 15 12:50:34.341215 containerd[1702]: time="2025-01-15T12:50:34.341166485Z" level=info msg="CreateContainer within sandbox \"177ceee89d8f182f3b6fc3fa8a6790f0343b06eb964c22d6eac563aeffa274ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 12:50:34.342372 containerd[1702]: time="2025-01-15T12:50:34.341956806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-f89ceb891c,Uid:5f1e702d3783eaea0ea581588038c2fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"89de9341fa65219559598fe6a8498dbdb1702c515e69345f27236043f71aa6f4\"" Jan 15 12:50:34.345454 containerd[1702]: time="2025-01-15T12:50:34.345414690Z" level=info msg="CreateContainer within sandbox \"89de9341fa65219559598fe6a8498dbdb1702c515e69345f27236043f71aa6f4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 12:50:34.367049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206220426.mount: Deactivated successfully. 
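Note: the messages above trace the standard CRI sequence the kubelet drives per pod: RunPodSandbox creates the pause-based sandbox (its ID names the cri-containerd-… scope started earlier), CreateContainer stages the application container inside that sandbox, and StartContainer (below) runs it. A minimal Go sketch of the same sequence against containerd's CRI gRPC endpoint; the socket path and the elided configs are assumptions, not the kubelet's actual code path:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's CRI socket; path assumed for this sketch.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox: create the pod sandbox (the pause container).
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{ /* pod metadata elided */ },
        })
        if err != nil {
            panic(err)
        }

        // 2. CreateContainer: stage the app container inside that sandbox.
        c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config:       &runtimeapi.ContainerConfig{ /* image, command elided */ },
        })
        if err != nil {
            panic(err)
        }

        // 3. StartContainer: the log's "StartContainer ... returns
        // successfully" is the other side of this call.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: c.ContainerId,
        }); err != nil {
            panic(err)
        }
    }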
Jan 15 12:50:34.406426 containerd[1702]: time="2025-01-15T12:50:34.406181602Z" level=info msg="CreateContainer within sandbox \"8b3c508f484810aa0d0ffc7366609c1040f2ac3a2112d0d03256cd8406d7771c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2b3898bec069efe284db1b1115fba5f7a80744b4095d0f90c44abb7e87571137\"" Jan 15 12:50:34.409862 containerd[1702]: time="2025-01-15T12:50:34.409820647Z" level=info msg="CreateContainer within sandbox \"177ceee89d8f182f3b6fc3fa8a6790f0343b06eb964c22d6eac563aeffa274ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fdb53c643b48058b2f5bb19f0bee5757f771c41aa5e98c9d82ad7ea749a7c379\"" Jan 15 12:50:34.410204 containerd[1702]: time="2025-01-15T12:50:34.410167847Z" level=info msg="StartContainer for \"2b3898bec069efe284db1b1115fba5f7a80744b4095d0f90c44abb7e87571137\"" Jan 15 12:50:34.413643 containerd[1702]: time="2025-01-15T12:50:34.412876050Z" level=info msg="CreateContainer within sandbox \"89de9341fa65219559598fe6a8498dbdb1702c515e69345f27236043f71aa6f4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ca1757d399a359191f0862efb734b0139c4bbdd333dcd9c921503595e59b5bae\"" Jan 15 12:50:34.413643 containerd[1702]: time="2025-01-15T12:50:34.413031010Z" level=info msg="StartContainer for \"fdb53c643b48058b2f5bb19f0bee5757f771c41aa5e98c9d82ad7ea749a7c379\"" Jan 15 12:50:34.420385 containerd[1702]: time="2025-01-15T12:50:34.420349259Z" level=info msg="StartContainer for \"ca1757d399a359191f0862efb734b0139c4bbdd333dcd9c921503595e59b5bae\"" Jan 15 12:50:34.442058 systemd[1]: Started cri-containerd-2b3898bec069efe284db1b1115fba5f7a80744b4095d0f90c44abb7e87571137.scope - libcontainer container 2b3898bec069efe284db1b1115fba5f7a80744b4095d0f90c44abb7e87571137. Jan 15 12:50:34.448937 systemd[1]: Started cri-containerd-fdb53c643b48058b2f5bb19f0bee5757f771c41aa5e98c9d82ad7ea749a7c379.scope - libcontainer container fdb53c643b48058b2f5bb19f0bee5757f771c41aa5e98c9d82ad7ea749a7c379. Jan 15 12:50:34.469930 systemd[1]: Started cri-containerd-ca1757d399a359191f0862efb734b0139c4bbdd333dcd9c921503595e59b5bae.scope - libcontainer container ca1757d399a359191f0862efb734b0139c4bbdd333dcd9c921503595e59b5bae. Jan 15 12:50:34.498013 containerd[1702]: time="2025-01-15T12:50:34.497744671Z" level=info msg="StartContainer for \"2b3898bec069efe284db1b1115fba5f7a80744b4095d0f90c44abb7e87571137\" returns successfully" Jan 15 12:50:34.522204 containerd[1702]: time="2025-01-15T12:50:34.522148020Z" level=info msg="StartContainer for \"ca1757d399a359191f0862efb734b0139c4bbdd333dcd9c921503595e59b5bae\" returns successfully" Jan 15 12:50:34.534345 containerd[1702]: time="2025-01-15T12:50:34.534299594Z" level=info msg="StartContainer for \"fdb53c643b48058b2f5bb19f0bee5757f771c41aa5e98c9d82ad7ea749a7c379\" returns successfully" Jan 15 12:50:35.251813 kubelet[2874]: I0115 12:50:35.250528 2874 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:35.310080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2323369791.mount: Deactivated successfully. 
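Note: one more pattern worth reading out of the retry noise: the lease controller's "will retry" intervals double on every failure, a plain exponential backoff (any cap or jitter is not visible in this log):

    interval(n) ≈ 200 ms × 2^n  →  200 ms, 400 ms, 800 ms, 1.6 s, ...

Registration finally sticks once the just-started kube-apiserver container begins answering on 10.200.20.39:6443, as the following lines show.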
Jan 15 12:50:36.873955 kubelet[2874]: E0115 12:50:36.873909 2874 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-f89ceb891c\" not found" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:36.989598 kubelet[2874]: I0115 12:50:36.989554 2874 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:37.009295 kubelet[2874]: E0115 12:50:37.009192 2874 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-f89ceb891c.181adeaaf43e069c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-f89ceb891c,UID:ci-4081.3.0-a-f89ceb891c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-f89ceb891c,},FirstTimestamp:2025-01-15 12:50:32.12825974 +0000 UTC m=+1.202258667,LastTimestamp:2025-01-15 12:50:32.12825974 +0000 UTC m=+1.202258667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-f89ceb891c,}" Jan 15 12:50:37.102052 kubelet[2874]: E0115 12:50:37.101938 2874 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-f89ceb891c.181adeaaf4b8123e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-f89ceb891c,UID:ci-4081.3.0-a-f89ceb891c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-f89ceb891c,},FirstTimestamp:2025-01-15 12:50:32.13625811 +0000 UTC m=+1.210256917,LastTimestamp:2025-01-15 12:50:32.13625811 +0000 UTC m=+1.210256917,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-f89ceb891c,}" Jan 15 12:50:37.129139 kubelet[2874]: I0115 12:50:37.128490 2874 apiserver.go:52] "Watching apiserver" Jan 15 12:50:37.146198 kubelet[2874]: I0115 12:50:37.146136 2874 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 15 12:50:37.185547 kubelet[2874]: E0115 12:50:37.185214 2874 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-f89ceb891c.181adeaaf8cef8f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-f89ceb891c,UID:ci-4081.3.0-a-f89ceb891c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081.3.0-a-f89ceb891c status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-f89ceb891c,},FirstTimestamp:2025-01-15 12:50:32.204867831 +0000 UTC m=+1.278866638,LastTimestamp:2025-01-15 12:50:32.204867831 +0000 UTC m=+1.278866638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-f89ceb891c,}" Jan 15 12:50:37.253308 kubelet[2874]: E0115 12:50:37.253063 2874 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-f89ceb891c.181adeaaf8cf0cf7 default 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-f89ceb891c,UID:ci-4081.3.0-a-f89ceb891c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4081.3.0-a-f89ceb891c status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-f89ceb891c,},FirstTimestamp:2025-01-15 12:50:32.204872951 +0000 UTC m=+1.278871758,LastTimestamp:2025-01-15 12:50:32.204872951 +0000 UTC m=+1.278871758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-f89ceb891c,}" Jan 15 12:50:38.186820 kubelet[2874]: W0115 12:50:38.186711 2874 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 12:50:39.365080 systemd[1]: Reloading requested from client PID 3149 ('systemctl') (unit session-9.scope)... Jan 15 12:50:39.365098 systemd[1]: Reloading... Jan 15 12:50:39.469842 zram_generator::config[3189]: No configuration found. Jan 15 12:50:39.575397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:50:39.662500 systemd[1]: Reloading finished in 297 ms. Jan 15 12:50:39.696892 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:39.715043 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 12:50:39.715254 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:39.715303 systemd[1]: kubelet.service: Consumed 1.584s CPU time, 113.3M memory peak, 0B memory swap peak. Jan 15 12:50:39.721035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:39.818527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:39.828391 (kubelet)[3253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 12:50:39.885804 kubelet[3253]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 12:50:39.885804 kubelet[3253]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 12:50:39.885804 kubelet[3253]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
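The three deprecation warnings the restarted kubelet prints above all point the same way: --container-runtime-endpoint, --pod-infra-container-image, and --volume-plugin-dir should move into the file passed via --config. A minimal sketch of reading such a config with sigs.k8s.io/yaml; the struct below mirrors only the two documented KubeletConfiguration keys relevant here (it is not the full upstream type), and the file path is an assumption:

package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml"
)

// Subset of KubeletConfiguration; the field tags match the documented config-file keys.
type kubeletConfig struct {
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
	VolumePluginDir          string `json:"volumePluginDir"`
}

func main() {
	raw, err := os.ReadFile("/etc/kubernetes/kubelet-config.yaml") // hypothetical path
	if err != nil {
		panic(err)
	}
	var c kubeletConfig
	if err := yaml.Unmarshal(raw, &c); err != nil { // sigs.k8s.io/yaml converts YAML to JSON first
		panic(err)
	}
	fmt.Println(c.ContainerRuntimeEndpoint, c.VolumePluginDir)
}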
Jan 15 12:50:39.886159 kubelet[3253]: I0115 12:50:39.885859 3253 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 12:50:39.891242 kubelet[3253]: I0115 12:50:39.891204 3253 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 15 12:50:39.891242 kubelet[3253]: I0115 12:50:39.891233 3253 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 12:50:39.891495 kubelet[3253]: I0115 12:50:39.891463 3253 server.go:927] "Client rotation is on, will bootstrap in background" Jan 15 12:50:39.893483 kubelet[3253]: I0115 12:50:39.893017 3253 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 15 12:50:39.894789 kubelet[3253]: I0115 12:50:39.894754 3253 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 12:50:39.915477 kubelet[3253]: I0115 12:50:39.915371 3253 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 15 12:50:39.916019 kubelet[3253]: I0115 12:50:39.915819 3253 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 12:50:39.916068 kubelet[3253]: I0115 12:50:39.915853 3253 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-f89ceb891c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 15 12:50:39.916068 kubelet[3253]: I0115 12:50:39.916043 3253 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 12:50:39.916068 kubelet[3253]: I0115 12:50:39.916053 3253 container_manager_linux.go:301] "Creating device plugin manager" Jan 15 12:50:39.916208 kubelet[3253]: I0115 12:50:39.916088 3253 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:50:39.916689 kubelet[3253]: I0115 12:50:39.916614 3253 kubelet.go:400] "Attempting to sync node with API server" Jan 15 12:50:39.918861 kubelet[3253]: I0115 12:50:39.918833 3253 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Jan 15 12:50:39.919041 kubelet[3253]: I0115 12:50:39.919022 3253 kubelet.go:312] "Adding apiserver pod source" Jan 15 12:50:39.919090 kubelet[3253]: I0115 12:50:39.919044 3253 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 12:50:39.922918 kubelet[3253]: I0115 12:50:39.922886 3253 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 12:50:39.923093 kubelet[3253]: I0115 12:50:39.923071 3253 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 12:50:39.923477 kubelet[3253]: I0115 12:50:39.923457 3253 server.go:1264] "Started kubelet" Jan 15 12:50:39.929384 kubelet[3253]: I0115 12:50:39.929337 3253 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 12:50:39.944733 kubelet[3253]: I0115 12:50:39.939941 3253 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 12:50:39.944733 kubelet[3253]: I0115 12:50:39.940842 3253 server.go:455] "Adding debug handlers to kubelet server" Jan 15 12:50:39.944733 kubelet[3253]: I0115 12:50:39.941606 3253 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 12:50:39.945080 kubelet[3253]: I0115 12:50:39.945064 3253 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 12:50:39.948347 kubelet[3253]: I0115 12:50:39.948034 3253 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 15 12:50:39.948347 kubelet[3253]: I0115 12:50:39.948212 3253 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 15 12:50:39.948347 kubelet[3253]: I0115 12:50:39.948333 3253 reconciler.go:26] "Reconciler: start to sync state" Jan 15 12:50:39.950358 kubelet[3253]: I0115 12:50:39.950153 3253 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 12:50:39.953759 kubelet[3253]: I0115 12:50:39.952828 3253 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 12:50:39.953759 kubelet[3253]: I0115 12:50:39.952868 3253 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 12:50:39.953759 kubelet[3253]: I0115 12:50:39.952885 3253 kubelet.go:2337] "Starting kubelet main sync loop" Jan 15 12:50:39.953759 kubelet[3253]: E0115 12:50:39.952923 3253 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 12:50:39.957709 kubelet[3253]: I0115 12:50:39.957629 3253 factory.go:221] Registration of the systemd container factory successfully Jan 15 12:50:39.959481 kubelet[3253]: I0115 12:50:39.958756 3253 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 12:50:39.969713 kubelet[3253]: I0115 12:50:39.968965 3253 factory.go:221] Registration of the containerd container factory successfully Jan 15 12:50:39.992998 kubelet[3253]: E0115 12:50:39.992952 3253 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 12:50:40.035760 kubelet[3253]: I0115 12:50:40.035438 3253 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 12:50:40.035760 kubelet[3253]: I0115 12:50:40.035456 3253 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 12:50:40.035760 kubelet[3253]: I0115 12:50:40.035478 3253 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:50:40.035760 kubelet[3253]: I0115 12:50:40.035632 3253 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 12:50:40.035760 kubelet[3253]: I0115 12:50:40.035642 3253 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 12:50:40.035760 kubelet[3253]: I0115 12:50:40.035660 3253 policy_none.go:49] "None policy: Start" Jan 15 12:50:40.036818 kubelet[3253]: I0115 12:50:40.036753 3253 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 12:50:40.036818 kubelet[3253]: I0115 12:50:40.036778 3253 state_mem.go:35] "Initializing new in-memory state store" Jan 15 12:50:40.036970 kubelet[3253]: I0115 12:50:40.036918 3253 state_mem.go:75] "Updated machine memory state" Jan 15 12:50:40.041362 kubelet[3253]: I0115 12:50:40.041325 3253 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 12:50:40.041844 kubelet[3253]: I0115 12:50:40.041502 3253 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 12:50:40.041844 kubelet[3253]: I0115 12:50:40.041632 3253 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 12:50:40.048292 kubelet[3253]: I0115 12:50:40.048268 3253 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.054091 kubelet[3253]: I0115 12:50:40.054033 3253 topology_manager.go:215] "Topology Admit Handler" podUID="5f1e702d3783eaea0ea581588038c2fd" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.054195 kubelet[3253]: I0115 12:50:40.054150 3253 topology_manager.go:215] "Topology Admit Handler" podUID="d23128fe1a23cbbba9284158943d9311" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.054195 kubelet[3253]: I0115 12:50:40.054187 3253 topology_manager.go:215] "Topology Admit Handler" podUID="d2dd0df29afca6e016a2e4d4e0b30b54" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.070291 kubelet[3253]: I0115 12:50:40.070213 3253 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.070291 kubelet[3253]: I0115 12:50:40.070295 3253 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.072607 kubelet[3253]: W0115 12:50:40.071603 3253 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 12:50:40.072607 kubelet[3253]: E0115 12:50:40.071656 3253 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-f89ceb891c\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.072607 kubelet[3253]: W0115 12:50:40.072289 3253 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 
12:50:40.072607 kubelet[3253]: W0115 12:50:40.072505 3253 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 12:50:40.148635 kubelet[3253]: I0115 12:50:40.148598 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d23128fe1a23cbbba9284158943d9311-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-f89ceb891c\" (UID: \"d23128fe1a23cbbba9284158943d9311\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.148635 kubelet[3253]: I0115 12:50:40.148636 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d23128fe1a23cbbba9284158943d9311-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-f89ceb891c\" (UID: \"d23128fe1a23cbbba9284158943d9311\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.148944 kubelet[3253]: I0115 12:50:40.148655 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d23128fe1a23cbbba9284158943d9311-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-f89ceb891c\" (UID: \"d23128fe1a23cbbba9284158943d9311\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.148944 kubelet[3253]: I0115 12:50:40.148676 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2dd0df29afca6e016a2e4d4e0b30b54-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-f89ceb891c\" (UID: \"d2dd0df29afca6e016a2e4d4e0b30b54\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.148944 kubelet[3253]: I0115 12:50:40.148707 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2dd0df29afca6e016a2e4d4e0b30b54-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-f89ceb891c\" (UID: \"d2dd0df29afca6e016a2e4d4e0b30b54\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.148944 kubelet[3253]: I0115 12:50:40.148744 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f1e702d3783eaea0ea581588038c2fd-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-f89ceb891c\" (UID: \"5f1e702d3783eaea0ea581588038c2fd\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.148944 kubelet[3253]: I0115 12:50:40.148760 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d2dd0df29afca6e016a2e4d4e0b30b54-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-f89ceb891c\" (UID: \"d2dd0df29afca6e016a2e4d4e0b30b54\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.149076 kubelet[3253]: I0115 12:50:40.148775 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2dd0df29afca6e016a2e4d4e0b30b54-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-f89ceb891c\" (UID: 
\"d2dd0df29afca6e016a2e4d4e0b30b54\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.149076 kubelet[3253]: I0115 12:50:40.148802 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2dd0df29afca6e016a2e4d4e0b30b54-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-f89ceb891c\" (UID: \"d2dd0df29afca6e016a2e4d4e0b30b54\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:40.921456 kubelet[3253]: I0115 12:50:40.921381 3253 apiserver.go:52] "Watching apiserver" Jan 15 12:50:40.948490 kubelet[3253]: I0115 12:50:40.948442 3253 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 15 12:50:41.031161 kubelet[3253]: W0115 12:50:41.031052 3253 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 12:50:41.031161 kubelet[3253]: E0115 12:50:41.031118 3253 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-f89ceb891c\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-f89ceb891c" Jan 15 12:50:41.041598 kubelet[3253]: I0115 12:50:41.041378 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-f89ceb891c" podStartSLOduration=3.041362675 podStartE2EDuration="3.041362675s" podCreationTimestamp="2025-01-15 12:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:50:41.041271595 +0000 UTC m=+1.206754945" watchObservedRunningTime="2025-01-15 12:50:41.041362675 +0000 UTC m=+1.206845985" Jan 15 12:50:41.070110 kubelet[3253]: I0115 12:50:41.069939 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-f89ceb891c" podStartSLOduration=1.069922549 podStartE2EDuration="1.069922549s" podCreationTimestamp="2025-01-15 12:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:50:41.054628451 +0000 UTC m=+1.220111801" watchObservedRunningTime="2025-01-15 12:50:41.069922549 +0000 UTC m=+1.235405859" Jan 15 12:50:41.090026 kubelet[3253]: I0115 12:50:41.089805 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-f89ceb891c" podStartSLOduration=1.089784813 podStartE2EDuration="1.089784813s" podCreationTimestamp="2025-01-15 12:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:50:41.07081715 +0000 UTC m=+1.236300500" watchObservedRunningTime="2025-01-15 12:50:41.089784813 +0000 UTC m=+1.255268163" Jan 15 12:50:45.337878 sudo[2323]: pam_unix(sudo:session): session closed for user root Jan 15 12:50:45.420530 sshd[2320]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:45.423610 systemd[1]: sshd@6-10.200.20.39:22-10.200.16.10:60350.service: Deactivated successfully. Jan 15 12:50:45.426146 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 12:50:45.426456 systemd[1]: session-9.scope: Consumed 5.336s CPU time, 187.6M memory peak, 0B memory swap peak. Jan 15 12:50:45.427802 systemd-logind[1665]: Session 9 logged out. 
Waiting for processes to exit. Jan 15 12:50:45.429180 systemd-logind[1665]: Removed session 9. Jan 15 12:50:53.171980 kubelet[3253]: I0115 12:50:53.171915 3253 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 12:50:53.172406 containerd[1702]: time="2025-01-15T12:50:53.172340516Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 12:50:53.172600 kubelet[3253]: I0115 12:50:53.172515 3253 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 12:50:54.010070 kubelet[3253]: I0115 12:50:54.006431 3253 topology_manager.go:215] "Topology Admit Handler" podUID="0c671189-d3a7-4c64-bf7a-9aca12814b61" podNamespace="kube-system" podName="kube-proxy-cqdb9" Jan 15 12:50:54.022658 systemd[1]: Created slice kubepods-besteffort-pod0c671189_d3a7_4c64_bf7a_9aca12814b61.slice - libcontainer container kubepods-besteffort-pod0c671189_d3a7_4c64_bf7a_9aca12814b61.slice. Jan 15 12:50:54.032274 kubelet[3253]: I0115 12:50:54.031668 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c671189-d3a7-4c64-bf7a-9aca12814b61-kube-proxy\") pod \"kube-proxy-cqdb9\" (UID: \"0c671189-d3a7-4c64-bf7a-9aca12814b61\") " pod="kube-system/kube-proxy-cqdb9" Jan 15 12:50:54.032461 kubelet[3253]: I0115 12:50:54.032440 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpwb8\" (UniqueName: \"kubernetes.io/projected/0c671189-d3a7-4c64-bf7a-9aca12814b61-kube-api-access-dpwb8\") pod \"kube-proxy-cqdb9\" (UID: \"0c671189-d3a7-4c64-bf7a-9aca12814b61\") " pod="kube-system/kube-proxy-cqdb9" Jan 15 12:50:54.032594 kubelet[3253]: I0115 12:50:54.032579 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c671189-d3a7-4c64-bf7a-9aca12814b61-lib-modules\") pod \"kube-proxy-cqdb9\" (UID: \"0c671189-d3a7-4c64-bf7a-9aca12814b61\") " pod="kube-system/kube-proxy-cqdb9" Jan 15 12:50:54.032668 kubelet[3253]: I0115 12:50:54.032657 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c671189-d3a7-4c64-bf7a-9aca12814b61-xtables-lock\") pod \"kube-proxy-cqdb9\" (UID: \"0c671189-d3a7-4c64-bf7a-9aca12814b61\") " pod="kube-system/kube-proxy-cqdb9" Jan 15 12:50:54.292599 kubelet[3253]: I0115 12:50:54.292470 3253 topology_manager.go:215] "Topology Admit Handler" podUID="41d6ba1b-cfdc-460b-aeb0-bf78dec53c00" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-7t89h" Jan 15 12:50:54.302152 systemd[1]: Created slice kubepods-besteffort-pod41d6ba1b_cfdc_460b_aeb0_bf78dec53c00.slice - libcontainer container kubepods-besteffort-pod41d6ba1b_cfdc_460b_aeb0_bf78dec53c00.slice. 
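The "Updating runtime config through cri with podcidr" entry at 12:50:53 above is the kubelet pushing the node's pod CIDR down to containerd, which answers that no CNI config template is set and it will wait for another component (here, Calico) to drop the CNI config. That push is the CRI UpdateRuntimeConfig call; a hedged sketch of it, with the same assumed socket path as in the earlier sketch and the CIDR taken from the log:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // path assumed
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// UpdateRuntimeConfig carries only the network config; the CIDR is the one logged.
	_, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(context.Background(),
		&runtimeapi.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimeapi.RuntimeConfig{
				NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
}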
Jan 15 12:50:54.334895 kubelet[3253]: I0115 12:50:54.334863 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/41d6ba1b-cfdc-460b-aeb0-bf78dec53c00-var-lib-calico\") pod \"tigera-operator-7bc55997bb-7t89h\" (UID: \"41d6ba1b-cfdc-460b-aeb0-bf78dec53c00\") " pod="tigera-operator/tigera-operator-7bc55997bb-7t89h" Jan 15 12:50:54.335157 kubelet[3253]: I0115 12:50:54.335065 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmsj7\" (UniqueName: \"kubernetes.io/projected/41d6ba1b-cfdc-460b-aeb0-bf78dec53c00-kube-api-access-dmsj7\") pod \"tigera-operator-7bc55997bb-7t89h\" (UID: \"41d6ba1b-cfdc-460b-aeb0-bf78dec53c00\") " pod="tigera-operator/tigera-operator-7bc55997bb-7t89h" Jan 15 12:50:54.335460 containerd[1702]: time="2025-01-15T12:50:54.335424375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqdb9,Uid:0c671189-d3a7-4c64-bf7a-9aca12814b61,Namespace:kube-system,Attempt:0,}" Jan 15 12:50:54.372833 containerd[1702]: time="2025-01-15T12:50:54.372632863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:50:54.372833 containerd[1702]: time="2025-01-15T12:50:54.372781023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:50:54.372833 containerd[1702]: time="2025-01-15T12:50:54.372811903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:54.373213 containerd[1702]: time="2025-01-15T12:50:54.372898863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:54.394938 systemd[1]: Started cri-containerd-241584a3587dabb886ae31da9af032ccde51f01acf62f3b5987dc367f201908c.scope - libcontainer container 241584a3587dabb886ae31da9af032ccde51f01acf62f3b5987dc367f201908c. Jan 15 12:50:54.414180 containerd[1702]: time="2025-01-15T12:50:54.414055596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqdb9,Uid:0c671189-d3a7-4c64-bf7a-9aca12814b61,Namespace:kube-system,Attempt:0,} returns sandbox id \"241584a3587dabb886ae31da9af032ccde51f01acf62f3b5987dc367f201908c\"" Jan 15 12:50:54.418812 containerd[1702]: time="2025-01-15T12:50:54.418654882Z" level=info msg="CreateContainer within sandbox \"241584a3587dabb886ae31da9af032ccde51f01acf62f3b5987dc367f201908c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 12:50:54.456776 containerd[1702]: time="2025-01-15T12:50:54.456705611Z" level=info msg="CreateContainer within sandbox \"241584a3587dabb886ae31da9af032ccde51f01acf62f3b5987dc367f201908c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fce7448e0aff8e91a603c1e7186b666471c2d5ebab5d99a7e98d21099a42af22\"" Jan 15 12:50:54.458573 containerd[1702]: time="2025-01-15T12:50:54.458522253Z" level=info msg="StartContainer for \"fce7448e0aff8e91a603c1e7186b666471c2d5ebab5d99a7e98d21099a42af22\"" Jan 15 12:50:54.481919 systemd[1]: Started cri-containerd-fce7448e0aff8e91a603c1e7186b666471c2d5ebab5d99a7e98d21099a42af22.scope - libcontainer container fce7448e0aff8e91a603c1e7186b666471c2d5ebab5d99a7e98d21099a42af22. 
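Each "Created slice" entry above follows the systemd cgroup driver's naming rule that the log itself makes visible: the pod UID has its dashes escaped to underscores and the unit is nested under the pod's QoS class (kubepods-besteffort-...). A small sketch of that mapping, inferred from these journal entries rather than lifted from kubelet source:

package main

import (
	"fmt"
	"strings"
)

// sliceForPod reproduces the unit names seen in the log for BestEffort pods.
// Inferred from the entries above; the kubelet's real implementation lives elsewhere.
func sliceForPod(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_") // systemd escapes '-' inside unit-name components
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(sliceForPod("besteffort", "0c671189-d3a7-4c64-bf7a-9aca12814b61"))
	// Output: kubepods-besteffort-pod0c671189_d3a7_4c64_bf7a_9aca12814b61.slice
}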
Jan 15 12:50:54.508657 containerd[1702]: time="2025-01-15T12:50:54.508461358Z" level=info msg="StartContainer for \"fce7448e0aff8e91a603c1e7186b666471c2d5ebab5d99a7e98d21099a42af22\" returns successfully" Jan 15 12:50:54.605382 containerd[1702]: time="2025-01-15T12:50:54.605247203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-7t89h,Uid:41d6ba1b-cfdc-460b-aeb0-bf78dec53c00,Namespace:tigera-operator,Attempt:0,}" Jan 15 12:50:54.649831 containerd[1702]: time="2025-01-15T12:50:54.649655220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:50:54.649831 containerd[1702]: time="2025-01-15T12:50:54.649706820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:50:54.650120 containerd[1702]: time="2025-01-15T12:50:54.649801980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:54.650120 containerd[1702]: time="2025-01-15T12:50:54.649905580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:54.665915 systemd[1]: Started cri-containerd-02933590c88701522a2050b875aa10ea6e59bb977f5a74f0fe7a7e7d152e2a21.scope - libcontainer container 02933590c88701522a2050b875aa10ea6e59bb977f5a74f0fe7a7e7d152e2a21. Jan 15 12:50:54.699012 containerd[1702]: time="2025-01-15T12:50:54.698407403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-7t89h,Uid:41d6ba1b-cfdc-460b-aeb0-bf78dec53c00,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"02933590c88701522a2050b875aa10ea6e59bb977f5a74f0fe7a7e7d152e2a21\"" Jan 15 12:50:54.703334 containerd[1702]: time="2025-01-15T12:50:54.703300329Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 15 12:50:55.057659 kubelet[3253]: I0115 12:50:55.057593 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cqdb9" podStartSLOduration=2.057574466 podStartE2EDuration="2.057574466s" podCreationTimestamp="2025-01-15 12:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:50:55.057077065 +0000 UTC m=+15.222560415" watchObservedRunningTime="2025-01-15 12:50:55.057574466 +0000 UTC m=+15.223057816" Jan 15 12:50:57.075794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3287324053.mount: Deactivated successfully. 
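The tigera-operator sandbox comes up before its image exists locally, so the kubelet issues a blocking CRI PullImage ("PullImage \"quay.io/tigera/operator:v1.36.2\"" at 12:50:54.703), which accounts for the gap until 12:50:57 below. A hedged sketch of the same call via the CRI ImageService, again with the assumed containerd socket:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // path assumed
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	img := runtimeapi.NewImageServiceClient(conn)
	// Image reference taken from the log; the call blocks until the pull completes.
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("pulled:", resp.ImageRef) // resolves to the sha256 digest logged at 12:50:57
}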
Jan 15 12:50:57.449211 containerd[1702]: time="2025-01-15T12:50:57.449142468Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:57.451468 containerd[1702]: time="2025-01-15T12:50:57.451430151Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19126000" Jan 15 12:50:57.456414 containerd[1702]: time="2025-01-15T12:50:57.456353198Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:57.460375 containerd[1702]: time="2025-01-15T12:50:57.460300163Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:57.461549 containerd[1702]: time="2025-01-15T12:50:57.460948804Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.757398275s" Jan 15 12:50:57.461549 containerd[1702]: time="2025-01-15T12:50:57.460984644Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 15 12:50:57.462969 containerd[1702]: time="2025-01-15T12:50:57.462936126Z" level=info msg="CreateContainer within sandbox \"02933590c88701522a2050b875aa10ea6e59bb977f5a74f0fe7a7e7d152e2a21\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 12:50:57.497540 containerd[1702]: time="2025-01-15T12:50:57.497489371Z" level=info msg="CreateContainer within sandbox \"02933590c88701522a2050b875aa10ea6e59bb977f5a74f0fe7a7e7d152e2a21\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"036f1d6b1cdcc0d5c1343bc7e49ae4e783ff66a3716d92a2875baf8c0724ef21\"" Jan 15 12:50:57.498908 containerd[1702]: time="2025-01-15T12:50:57.498091771Z" level=info msg="StartContainer for \"036f1d6b1cdcc0d5c1343bc7e49ae4e783ff66a3716d92a2875baf8c0724ef21\"" Jan 15 12:50:57.529910 systemd[1]: Started cri-containerd-036f1d6b1cdcc0d5c1343bc7e49ae4e783ff66a3716d92a2875baf8c0724ef21.scope - libcontainer container 036f1d6b1cdcc0d5c1343bc7e49ae4e783ff66a3716d92a2875baf8c0724ef21. 
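From the two figures logged above (bytes read=19126000, pull completed in 2.757398275s), the effective registry throughput works out to roughly 6.9 MB/s; a one-liner to reproduce the arithmetic:

package main

import "fmt"

func main() {
	// Figures taken verbatim from the pull log above.
	const bytesRead = 19126000.0 // "bytes read=19126000"
	const seconds = 2.757398275  // "in 2.757398275s"
	fmt.Printf("%.2f MB/s\n", bytesRead/seconds/1e6) // prints ≈ 6.94 MB/s
}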
Jan 15 12:50:57.555823 containerd[1702]: time="2025-01-15T12:50:57.555774886Z" level=info msg="StartContainer for \"036f1d6b1cdcc0d5c1343bc7e49ae4e783ff66a3716d92a2875baf8c0724ef21\" returns successfully" Jan 15 12:51:02.010182 kubelet[3253]: I0115 12:51:02.009116 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-7t89h" podStartSLOduration=5.249307185 podStartE2EDuration="8.009096303s" podCreationTimestamp="2025-01-15 12:50:54 +0000 UTC" firstStartedPulling="2025-01-15 12:50:54.701880967 +0000 UTC m=+14.867364277" lastFinishedPulling="2025-01-15 12:50:57.461670045 +0000 UTC m=+17.627153395" observedRunningTime="2025-01-15 12:50:58.070166989 +0000 UTC m=+18.235650339" watchObservedRunningTime="2025-01-15 12:51:02.009096303 +0000 UTC m=+22.174579653" Jan 15 12:51:02.010182 kubelet[3253]: I0115 12:51:02.009286 3253 topology_manager.go:215] "Topology Admit Handler" podUID="41e34d96-bb95-474c-8631-181455abad2b" podNamespace="calico-system" podName="calico-typha-6c45775887-j2kmd" Jan 15 12:51:02.017813 systemd[1]: Created slice kubepods-besteffort-pod41e34d96_bb95_474c_8631_181455abad2b.slice - libcontainer container kubepods-besteffort-pod41e34d96_bb95_474c_8631_181455abad2b.slice. Jan 15 12:51:02.084337 kubelet[3253]: I0115 12:51:02.084251 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e34d96-bb95-474c-8631-181455abad2b-tigera-ca-bundle\") pod \"calico-typha-6c45775887-j2kmd\" (UID: \"41e34d96-bb95-474c-8631-181455abad2b\") " pod="calico-system/calico-typha-6c45775887-j2kmd" Jan 15 12:51:02.084337 kubelet[3253]: I0115 12:51:02.084294 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41e34d96-bb95-474c-8631-181455abad2b-typha-certs\") pod \"calico-typha-6c45775887-j2kmd\" (UID: \"41e34d96-bb95-474c-8631-181455abad2b\") " pod="calico-system/calico-typha-6c45775887-j2kmd" Jan 15 12:51:02.084337 kubelet[3253]: I0115 12:51:02.084315 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drc7m\" (UniqueName: \"kubernetes.io/projected/41e34d96-bb95-474c-8631-181455abad2b-kube-api-access-drc7m\") pod \"calico-typha-6c45775887-j2kmd\" (UID: \"41e34d96-bb95-474c-8631-181455abad2b\") " pod="calico-system/calico-typha-6c45775887-j2kmd" Jan 15 12:51:02.157841 kubelet[3253]: I0115 12:51:02.157777 3253 topology_manager.go:215] "Topology Admit Handler" podUID="05f77726-51ca-4e95-aa49-061cb945a304" podNamespace="calico-system" podName="calico-node-vggn8" Jan 15 12:51:02.166204 systemd[1]: Created slice kubepods-besteffort-pod05f77726_51ca_4e95_aa49_061cb945a304.slice - libcontainer container kubepods-besteffort-pod05f77726_51ca_4e95_aa49_061cb945a304.slice. 
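The calico-node pod admitted here mounts a flexvol-driver-host host path, and until its init container populates /opt/libexec/kubernetes/kubelet-plugins/volume/exec, every plugin-prober sweep in the entries that follow fails twice over: the nodeagent~uds/uds executable is not found, and the resulting empty output cannot be parsed as the JSON status a FlexVolume driver must print. The "unexpected end of JSON input" text is exactly what Go's encoding/json returns for empty input, as this sketch shows:

package main

import (
	"encoding/json"
	"fmt"
)

// FlexVolume drivers are expected to print a JSON status such as {"status":"Success"}.
type driverStatus struct {
	Status string `json:"status"`
}

func main() {
	var s driverStatus
	// The failed exec produced no output, so the kubelet unmarshals an empty string.
	err := json.Unmarshal([]byte(""), &s)
	fmt.Println(err) // unexpected end of JSON input
}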
Jan 15 12:51:02.185209 kubelet[3253]: I0115 12:51:02.184787 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4qs6\" (UniqueName: \"kubernetes.io/projected/05f77726-51ca-4e95-aa49-061cb945a304-kube-api-access-w4qs6\") pod \"calico-node-vggn8\" (UID: \"05f77726-51ca-4e95-aa49-061cb945a304\") " pod="calico-system/calico-node-vggn8" Jan 15 12:51:02.185209 kubelet[3253]: I0115 12:51:02.184835 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/05f77726-51ca-4e95-aa49-061cb945a304-cni-log-dir\") pod \"calico-node-vggn8\" (UID: \"05f77726-51ca-4e95-aa49-061cb945a304\") " pod="calico-system/calico-node-vggn8" Jan 15 12:51:02.185209 kubelet[3253]: I0115 12:51:02.184857 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/05f77726-51ca-4e95-aa49-061cb945a304-var-run-calico\") pod \"calico-node-vggn8\" (UID: \"05f77726-51ca-4e95-aa49-061cb945a304\") " pod="calico-system/calico-node-vggn8" Jan 15 12:51:02.185372 kubelet[3253]: I0115 12:51:02.185309 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/05f77726-51ca-4e95-aa49-061cb945a304-flexvol-driver-host\") pod \"calico-node-vggn8\" (UID: \"05f77726-51ca-4e95-aa49-061cb945a304\") " pod="calico-system/calico-node-vggn8" Jan 15 12:51:02.185372 kubelet[3253]: I0115 12:51:02.185341 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/05f77726-51ca-4e95-aa49-061cb945a304-policysync\") pod \"calico-node-vggn8\" (UID: \"05f77726-51ca-4e95-aa49-061cb945a304\") " pod="calico-system/calico-node-vggn8" Jan 15 12:51:02.185423 kubelet[3253]: I0115 12:51:02.185386 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05f77726-51ca-4e95-aa49-061cb945a304-tigera-ca-bundle\") pod \"calico-node-vggn8\" (UID: \"05f77726-51ca-4e95-aa49-061cb945a304\") " pod="calico-system/calico-node-vggn8" Jan 15 12:51:02.185423 kubelet[3253]: I0115 12:51:02.185408 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05f77726-51ca-4e95-aa49-061cb945a304-xtables-lock\") pod \"calico-node-vggn8\" (UID: \"05f77726-51ca-4e95-aa49-061cb945a304\") " pod="calico-system/calico-node-vggn8" Jan 15 12:51:02.185467 kubelet[3253]: I0115 12:51:02.185422 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/05f77726-51ca-4e95-aa49-061cb945a304-cni-bin-dir\") pod \"calico-node-vggn8\" (UID: \"05f77726-51ca-4e95-aa49-061cb945a304\") " pod="calico-system/calico-node-vggn8" Jan 15 12:51:02.185467 kubelet[3253]: I0115 12:51:02.185449 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/05f77726-51ca-4e95-aa49-061cb945a304-node-certs\") pod \"calico-node-vggn8\" (UID: \"05f77726-51ca-4e95-aa49-061cb945a304\") " pod="calico-system/calico-node-vggn8" Jan 15 12:51:02.185506 kubelet[3253]: I0115 12:51:02.185466 3253 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05f77726-51ca-4e95-aa49-061cb945a304-var-lib-calico\") pod \"calico-node-vggn8\" (UID: \"05f77726-51ca-4e95-aa49-061cb945a304\") " pod="calico-system/calico-node-vggn8" Jan 15 12:51:02.185506 kubelet[3253]: I0115 12:51:02.185482 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/05f77726-51ca-4e95-aa49-061cb945a304-cni-net-dir\") pod \"calico-node-vggn8\" (UID: \"05f77726-51ca-4e95-aa49-061cb945a304\") " pod="calico-system/calico-node-vggn8" Jan 15 12:51:02.185557 kubelet[3253]: I0115 12:51:02.185530 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05f77726-51ca-4e95-aa49-061cb945a304-lib-modules\") pod \"calico-node-vggn8\" (UID: \"05f77726-51ca-4e95-aa49-061cb945a304\") " pod="calico-system/calico-node-vggn8" Jan 15 12:51:02.288865 kubelet[3253]: E0115 12:51:02.287848 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:02.288865 kubelet[3253]: W0115 12:51:02.287875 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:02.288865 kubelet[3253]: E0115 12:51:02.287896 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:02.288865 kubelet[3253]: E0115 12:51:02.288072 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:02.288865 kubelet[3253]: W0115 12:51:02.288080 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:02.288865 kubelet[3253]: E0115 12:51:02.288089 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:02.288865 kubelet[3253]: E0115 12:51:02.288505 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:02.288865 kubelet[3253]: W0115 12:51:02.288517 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:02.288865 kubelet[3253]: E0115 12:51:02.288530 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:02.288865 kubelet[3253]: E0115 12:51:02.288773 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:02.289155 kubelet[3253]: W0115 12:51:02.288784 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:02.289155 kubelet[3253]: E0115 12:51:02.288794 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:02.293256 kubelet[3253]: E0115 12:51:02.293201 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:02.293256 kubelet[3253]: W0115 12:51:02.293216 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:02.293256 kubelet[3253]: E0115 12:51:02.293228 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:02.312618 kubelet[3253]: E0115 12:51:02.312539 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:02.312618 kubelet[3253]: W0115 12:51:02.312559 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:02.312618 kubelet[3253]: E0115 12:51:02.312578 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:02.322888 containerd[1702]: time="2025-01-15T12:51:02.322809522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c45775887-j2kmd,Uid:41e34d96-bb95-474c-8631-181455abad2b,Namespace:calico-system,Attempt:0,}" Jan 15 12:51:02.369698 containerd[1702]: time="2025-01-15T12:51:02.369032137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:02.369698 containerd[1702]: time="2025-01-15T12:51:02.369605458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:02.371230 containerd[1702]: time="2025-01-15T12:51:02.370807420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:02.371230 containerd[1702]: time="2025-01-15T12:51:02.371125140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:02.376258 kubelet[3253]: I0115 12:51:02.375564 3253 topology_manager.go:215] "Topology Admit Handler" podUID="b15694bf-ebfd-4bd2-8276-f46e85d79323" podNamespace="calico-system" podName="csi-node-driver-kw7xv" Jan 15 12:51:02.378121 kubelet[3253]: E0115 12:51:02.377922 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw7xv" podUID="b15694bf-ebfd-4bd2-8276-f46e85d79323" Jan 15 12:51:02.384838 kubelet[3253]: E0115 12:51:02.384382 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:02.384838 kubelet[3253]: W0115 12:51:02.384405 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:02.384838 kubelet[3253]: E0115 12:51:02.384535 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:02.384987 kubelet[3253]: E0115 12:51:02.384887 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:02.384987 kubelet[3253]: W0115 12:51:02.384908 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:02.384987 kubelet[3253]: E0115 12:51:02.384921 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:02.386613 kubelet[3253]: E0115 12:51:02.385663 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:02.386613 kubelet[3253]: W0115 12:51:02.385680 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:02.386613 kubelet[3253]: E0115 12:51:02.385692 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:02.386613 kubelet[3253]: E0115 12:51:02.386092 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:02.386613 kubelet[3253]: W0115 12:51:02.386104 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:02.386613 kubelet[3253]: E0115 12:51:02.386116 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:02.388575 kubelet[3253]: E0115 12:51:02.387201 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:02.388575 kubelet[3253]: W0115 12:51:02.387214 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:02.388575 kubelet[3253]: E0115 12:51:02.387226 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three-message FlexVolume init failure repeats continuously through the rest of this section; duplicates omitted]
Jan 15 12:51:02.393224 kubelet[3253]: I0115 12:51:02.393189 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b15694bf-ebfd-4bd2-8276-f46e85d79323-kubelet-dir\") pod \"csi-node-driver-kw7xv\" (UID: \"b15694bf-ebfd-4bd2-8276-f46e85d79323\") " pod="calico-system/csi-node-driver-kw7xv"
Jan 15 12:51:02.394053 kubelet[3253]: I0115 12:51:02.394042 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b15694bf-ebfd-4bd2-8276-f46e85d79323-varrun\") pod \"csi-node-driver-kw7xv\" (UID: \"b15694bf-ebfd-4bd2-8276-f46e85d79323\") " pod="calico-system/csi-node-driver-kw7xv"
Jan 15 12:51:02.395931 kubelet[3253]: I0115 12:51:02.395871 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b15694bf-ebfd-4bd2-8276-f46e85d79323-socket-dir\") pod \"csi-node-driver-kw7xv\" (UID: \"b15694bf-ebfd-4bd2-8276-f46e85d79323\") " pod="calico-system/csi-node-driver-kw7xv"
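The repeating triplet above is the kubelet exec'ing the FlexVolume driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and parsing its stdout as JSON; because the binary is absent, stdout is empty and the unmarshal fails with "unexpected end of JSON input" (the directory name nodeagent~uds encodes vendor "nodeagent" and driver "uds"). A minimal sketch of a driver that would satisfy the init call, assuming the standard FlexVolume call convention; anything beyond the init path here is illustrative:

```go
// A sketch of the missing "uds" FlexVolume driver binary. The kubelet invokes
// it as: <binary> init   and expects a JSON status object on stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON shape the kubelet's driver-call.go parses.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// attach:false tells the kubelet not to issue attach/detach calls
		// this driver does not implement.
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// "Not supported" lets the kubelet fall back for unimplemented verbs.
		fmt.Println(`{"status":"Not supported"}`)
		os.Exit(1)
	}
}
```

Any well-formed JSON status would stop the unmarshal error; the probing itself only requires init to succeed, which is why the kubelet retries it on every plugin-directory scan.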
Jan 15 12:51:02.403866 systemd[1]: Started cri-containerd-00148a7c000e90138701f27970065634fd2e4996113ca7da08d0404072b3b0ec.scope - libcontainer container 00148a7c000e90138701f27970065634fd2e4996113ca7da08d0404072b3b0ec.
Jan 15 12:51:02.440575 containerd[1702]: time="2025-01-15T12:51:02.440465424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c45775887-j2kmd,Uid:41e34d96-bb95-474c-8631-181455abad2b,Namespace:calico-system,Attempt:0,} returns sandbox id \"00148a7c000e90138701f27970065634fd2e4996113ca7da08d0404072b3b0ec\""
Jan 15 12:51:02.445478 containerd[1702]: time="2025-01-15T12:51:02.443967228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 15 12:51:02.472237 containerd[1702]: time="2025-01-15T12:51:02.472094182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vggn8,Uid:05f77726-51ca-4e95-aa49-061cb945a304,Namespace:calico-system,Attempt:0,}"
Jan 15 12:51:02.498145 kubelet[3253]: I0115 12:51:02.497931 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b15694bf-ebfd-4bd2-8276-f46e85d79323-registration-dir\") pod \"csi-node-driver-kw7xv\" (UID: \"b15694bf-ebfd-4bd2-8276-f46e85d79323\") " pod="calico-system/csi-node-driver-kw7xv"
Jan 15 12:51:02.500847 kubelet[3253]: I0115 12:51:02.500773 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjx72\" (UniqueName: \"kubernetes.io/projected/b15694bf-ebfd-4bd2-8276-f46e85d79323-kube-api-access-zjx72\") pod \"csi-node-driver-kw7xv\" (UID: \"b15694bf-ebfd-4bd2-8276-f46e85d79323\") " pod="calico-system/csi-node-driver-kw7xv"
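The reconciler records above name the volumes being attached for pod csi-node-driver-kw7xv: four kubernetes.io/host-path volumes (kubelet-dir, varrun, socket-dir, registration-dir) plus one kubernetes.io/projected service-account token volume (kube-api-access-zjx72, injected automatically). A sketch of how the hostPath volumes would be declared with the k8s.io/api types; the log records only the volume names, so the host paths below are assumptions typical for a CSI node plugin, not values from this node:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// csiNodeDriverVolumes builds the four hostPath volumes named in the
// reconciler log records. Paths are illustrative assumptions.
func csiNodeDriverVolumes() []corev1.Volume {
	hostPath := func(name, path string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: path},
			},
		}
	}
	return []corev1.Volume{
		hostPath("kubelet-dir", "/var/lib/kubelet"),                        // pod volume mounts (assumed)
		hostPath("varrun", "/var/run"),                                     // runtime sockets (assumed)
		hostPath("socket-dir", "/var/run/csi"),                             // CSI endpoint dir (assumed)
		hostPath("registration-dir", "/var/lib/kubelet/plugins_registry"), // kubelet plugin registration (assumed)
	}
}

func main() {
	for _, v := range csiNodeDriverVolumes() {
		fmt.Printf("%s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}
```

The "VerifyControllerAttachedVolume started" messages are informational: the kubelet is confirming each declared volume is attached before it mounts them into the sandbox.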
Jan 15 12:51:02.522058 containerd[1702]: time="2025-01-15T12:51:02.521859962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 15 12:51:02.522058 containerd[1702]: time="2025-01-15T12:51:02.521932282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 15 12:51:02.522058 containerd[1702]: time="2025-01-15T12:51:02.521944922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 12:51:02.522490 containerd[1702]: time="2025-01-15T12:51:02.522399403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 12:51:02.542990 systemd[1]: Started cri-containerd-02e9964dcb316310d33739084894f5e6261ca4aa3e3e23e5c461f5581b4fad9d.scope - libcontainer container 02e9964dcb316310d33739084894f5e6261ca4aa3e3e23e5c461f5581b4fad9d.
Jan 15 12:51:02.571406 containerd[1702]: time="2025-01-15T12:51:02.571367702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vggn8,Uid:05f77726-51ca-4e95-aa49-061cb945a304,Namespace:calico-system,Attempt:0,} returns sandbox id \"02e9964dcb316310d33739084894f5e6261ca4aa3e3e23e5c461f5581b4fad9d\""
Jan 15 12:51:03.876481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount854726794.mount: Deactivated successfully.
Jan 15 12:51:03.955549 kubelet[3253]: E0115 12:51:03.954921 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw7xv" podUID="b15694bf-ebfd-4bd2-8276-f46e85d79323"
Jan 15 12:51:04.622707 containerd[1702]: time="2025-01-15T12:51:04.622508740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:04.625685 containerd[1702]: time="2025-01-15T12:51:04.625642104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 15 12:51:04.630300 containerd[1702]: time="2025-01-15T12:51:04.630220989Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:04.635913 containerd[1702]: time="2025-01-15T12:51:04.635845756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:04.636906 containerd[1702]: time="2025-01-15T12:51:04.636525357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.192516329s"
Jan 15 12:51:04.636906 containerd[1702]: time="2025-01-15T12:51:04.636562157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 15 12:51:04.637947 containerd[1702]: time="2025-01-15T12:51:04.637821959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 15 12:51:04.650276 containerd[1702]: time="2025-01-15T12:51:04.649182732Z" level=info msg="CreateContainer within sandbox \"00148a7c000e90138701f27970065634fd2e4996113ca7da08d0404072b3b0ec\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 15 12:51:04.681474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103612720.mount: Deactivated successfully.
Jan 15 12:51:04.691667 containerd[1702]: time="2025-01-15T12:51:04.691615504Z" level=info msg="CreateContainer within sandbox \"00148a7c000e90138701f27970065634fd2e4996113ca7da08d0404072b3b0ec\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6bf03ba98463f16f55a0ed4eb7417ae3058bd44d6c6169d77d6c4a607be2bd5e\""
Jan 15 12:51:04.692748 containerd[1702]: time="2025-01-15T12:51:04.692338784Z" level=info msg="StartContainer for \"6bf03ba98463f16f55a0ed4eb7417ae3058bd44d6c6169d77d6c4a607be2bd5e\""
Jan 15 12:51:04.725195 systemd[1]: Started cri-containerd-6bf03ba98463f16f55a0ed4eb7417ae3058bd44d6c6169d77d6c4a607be2bd5e.scope - libcontainer container 6bf03ba98463f16f55a0ed4eb7417ae3058bd44d6c6169d77d6c4a607be2bd5e.
Jan 15 12:51:04.759926 containerd[1702]: time="2025-01-15T12:51:04.759869986Z" level=info msg="StartContainer for \"6bf03ba98463f16f55a0ed4eb7417ae3058bd44d6c6169d77d6c4a607be2bd5e\" returns successfully"
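The pull records above give both the byte count ("bytes read=29231308") and the elapsed time ("in 2.192516329s") for ghcr.io/flatcar/calico/typha:v3.29.1, which works out to roughly 13.3 MB/s of effective pull throughput. A small check of that arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

// Effective pull rate for calico/typha:v3.29.1, computed from the two
// figures reported in the containerd log records above.
func main() {
	const bytesRead = 29231308 // "bytes read=29231308"
	dur, err := time.ParseDuration("2.192516329s")
	if err != nil {
		panic(err)
	}
	mbps := float64(bytesRead) / dur.Seconds() / 1e6
	fmt.Printf("~%.1f MB/s effective pull rate\n", mbps) // prints ~13.3 MB/s
}
```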
Jan 15 12:51:05.954538 kubelet[3253]: E0115 12:51:05.954170 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw7xv" podUID="b15694bf-ebfd-4bd2-8276-f46e85d79323"
Jan 15 12:51:06.075142 kubelet[3253]: I0115 12:51:06.074445 3253 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Error: unexpected end of JSON input" Jan 15 12:51:06.131645 kubelet[3253]: E0115 12:51:06.131602 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.131645 kubelet[3253]: W0115 12:51:06.131638 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.131741 kubelet[3253]: E0115 12:51:06.131651 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.131866 kubelet[3253]: E0115 12:51:06.131850 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.131866 kubelet[3253]: W0115 12:51:06.131865 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.131999 kubelet[3253]: E0115 12:51:06.131875 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.132261 kubelet[3253]: E0115 12:51:06.132242 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.132261 kubelet[3253]: W0115 12:51:06.132257 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.132362 kubelet[3253]: E0115 12:51:06.132276 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.132589 kubelet[3253]: E0115 12:51:06.132571 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.132589 kubelet[3253]: W0115 12:51:06.132587 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.132647 kubelet[3253]: E0115 12:51:06.132598 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.132905 kubelet[3253]: E0115 12:51:06.132886 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.132976 kubelet[3253]: W0115 12:51:06.132913 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.132976 kubelet[3253]: E0115 12:51:06.132925 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:06.133106 kubelet[3253]: E0115 12:51:06.133088 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.133106 kubelet[3253]: W0115 12:51:06.133101 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.133190 kubelet[3253]: E0115 12:51:06.133110 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.133275 kubelet[3253]: E0115 12:51:06.133258 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.133308 kubelet[3253]: W0115 12:51:06.133275 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.133308 kubelet[3253]: E0115 12:51:06.133284 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.133475 kubelet[3253]: E0115 12:51:06.133458 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.133475 kubelet[3253]: W0115 12:51:06.133471 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.133539 kubelet[3253]: E0115 12:51:06.133481 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.133800 kubelet[3253]: E0115 12:51:06.133773 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.133800 kubelet[3253]: W0115 12:51:06.133784 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.133800 kubelet[3253]: E0115 12:51:06.133794 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.133996 kubelet[3253]: E0115 12:51:06.133978 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.133996 kubelet[3253]: W0115 12:51:06.133993 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.134086 kubelet[3253]: E0115 12:51:06.134003 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:06.134193 kubelet[3253]: E0115 12:51:06.134166 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.134193 kubelet[3253]: W0115 12:51:06.134180 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.134250 kubelet[3253]: E0115 12:51:06.134200 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.230150 kubelet[3253]: E0115 12:51:06.229961 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.230150 kubelet[3253]: W0115 12:51:06.229985 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.230150 kubelet[3253]: E0115 12:51:06.230005 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.230544 kubelet[3253]: E0115 12:51:06.230458 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.230544 kubelet[3253]: W0115 12:51:06.230471 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.230544 kubelet[3253]: E0115 12:51:06.230491 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.230993 kubelet[3253]: E0115 12:51:06.230965 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.231192 kubelet[3253]: W0115 12:51:06.230984 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.231192 kubelet[3253]: E0115 12:51:06.231021 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.232393 kubelet[3253]: E0115 12:51:06.232365 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.232393 kubelet[3253]: W0115 12:51:06.232383 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.232652 kubelet[3253]: E0115 12:51:06.232400 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:06.232652 kubelet[3253]: E0115 12:51:06.232589 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.232652 kubelet[3253]: W0115 12:51:06.232598 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.232831 kubelet[3253]: E0115 12:51:06.232677 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.232831 kubelet[3253]: E0115 12:51:06.232825 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.232887 kubelet[3253]: W0115 12:51:06.232833 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.233211 kubelet[3253]: E0115 12:51:06.233160 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.233398 kubelet[3253]: E0115 12:51:06.233364 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.233398 kubelet[3253]: W0115 12:51:06.233380 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.233484 kubelet[3253]: E0115 12:51:06.233472 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.233611 kubelet[3253]: E0115 12:51:06.233589 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.233611 kubelet[3253]: W0115 12:51:06.233606 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.233696 kubelet[3253]: E0115 12:51:06.233619 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.234026 kubelet[3253]: E0115 12:51:06.233997 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.234026 kubelet[3253]: W0115 12:51:06.234015 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.234026 kubelet[3253]: E0115 12:51:06.234029 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:06.234233 kubelet[3253]: E0115 12:51:06.234201 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.234233 kubelet[3253]: W0115 12:51:06.234219 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.234297 kubelet[3253]: E0115 12:51:06.234235 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.234500 kubelet[3253]: E0115 12:51:06.234482 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.234500 kubelet[3253]: W0115 12:51:06.234498 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.234556 kubelet[3253]: E0115 12:51:06.234548 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.234807 kubelet[3253]: E0115 12:51:06.234782 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.234807 kubelet[3253]: W0115 12:51:06.234801 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.235017 kubelet[3253]: E0115 12:51:06.234894 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.235058 kubelet[3253]: E0115 12:51:06.235023 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.235058 kubelet[3253]: W0115 12:51:06.235031 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.235058 kubelet[3253]: E0115 12:51:06.235045 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.235214 kubelet[3253]: E0115 12:51:06.235194 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.235214 kubelet[3253]: W0115 12:51:06.235209 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.235295 kubelet[3253]: E0115 12:51:06.235225 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:06.235425 kubelet[3253]: E0115 12:51:06.235398 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.235425 kubelet[3253]: W0115 12:51:06.235414 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.235481 kubelet[3253]: E0115 12:51:06.235427 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.235914 kubelet[3253]: E0115 12:51:06.235889 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.235914 kubelet[3253]: W0115 12:51:06.235909 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.237342 kubelet[3253]: E0115 12:51:06.235930 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.237342 kubelet[3253]: E0115 12:51:06.236164 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.237342 kubelet[3253]: W0115 12:51:06.236174 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.237342 kubelet[3253]: E0115 12:51:06.236184 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:06.237342 kubelet[3253]: E0115 12:51:06.236528 3253 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:06.237342 kubelet[3253]: W0115 12:51:06.236539 3253 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:06.237342 kubelet[3253]: E0115 12:51:06.236550 3253 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:06.253598 containerd[1702]: time="2025-01-15T12:51:06.252904670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:06.256117 containerd[1702]: time="2025-01-15T12:51:06.255900914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 15 12:51:06.262023 containerd[1702]: time="2025-01-15T12:51:06.260068279Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:06.265662 containerd[1702]: time="2025-01-15T12:51:06.265603125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:06.266480 containerd[1702]: time="2025-01-15T12:51:06.266446926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.628588527s" Jan 15 12:51:06.266587 containerd[1702]: time="2025-01-15T12:51:06.266568486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 15 12:51:06.270356 containerd[1702]: time="2025-01-15T12:51:06.270299771Z" level=info msg="CreateContainer within sandbox \"02e9964dcb316310d33739084894f5e6261ca4aa3e3e23e5c461f5581b4fad9d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 15 12:51:06.302150 containerd[1702]: time="2025-01-15T12:51:06.302097169Z" level=info msg="CreateContainer within sandbox \"02e9964dcb316310d33739084894f5e6261ca4aa3e3e23e5c461f5581b4fad9d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"814009a40546e3923fd838318b67abc6dfb1cec388b54af82205a5e846695009\"" Jan 15 12:51:06.302975 containerd[1702]: time="2025-01-15T12:51:06.302932690Z" level=info msg="StartContainer for \"814009a40546e3923fd838318b67abc6dfb1cec388b54af82205a5e846695009\"" Jan 15 12:51:06.333911 systemd[1]: Started cri-containerd-814009a40546e3923fd838318b67abc6dfb1cec388b54af82205a5e846695009.scope - libcontainer container 814009a40546e3923fd838318b67abc6dfb1cec388b54af82205a5e846695009. Jan 15 12:51:06.360294 containerd[1702]: time="2025-01-15T12:51:06.360246080Z" level=info msg="StartContainer for \"814009a40546e3923fd838318b67abc6dfb1cec388b54af82205a5e846695009\" returns successfully" Jan 15 12:51:06.371743 systemd[1]: cri-containerd-814009a40546e3923fd838318b67abc6dfb1cec388b54af82205a5e846695009.scope: Deactivated successfully. Jan 15 12:51:06.642836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-814009a40546e3923fd838318b67abc6dfb1cec388b54af82205a5e846695009-rootfs.mount: Deactivated successfully. 
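The repeated driver-call failures above and the pod2daemon-flexvol activity here are two halves of one event: kubelet probes each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ by executing the driver binary with the argument init and parsing a single JSON object from its stdout, and until the flexvol-driver container started above has installed Calico's uds binary at that path, the exec fails and decoding the empty output gives "unexpected end of JSON input". A minimal, hypothetical stand-in for the contract kubelet expects (the real driver is Calico's uds binary, not this script):

    #!/usr/bin/env python3
    # Hypothetical FlexVolume driver: kubelet runs
    # <plugin-dir>/<vendor~driver>/<name> <op> [args...] and expects one JSON
    # object on stdout; an empty stdout is exactly what produces the
    # "unexpected end of JSON input" failures above.
    import json
    import sys

    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # attach: false tells kubelet this driver needs no attach/detach phase.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported"}))

This reading is consistent with the probe errors not recurring in the log once the flexvol-driver container has run at 12:51:06.36.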
Jan 15 12:51:07.096186 kubelet[3253]: I0115 12:51:07.094922 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c45775887-j2kmd" podStartSLOduration=3.9001206760000002 podStartE2EDuration="6.094901327s" podCreationTimestamp="2025-01-15 12:51:01 +0000 UTC" firstStartedPulling="2025-01-15 12:51:02.442838747 +0000 UTC m=+22.608322097" lastFinishedPulling="2025-01-15 12:51:04.637619398 +0000 UTC m=+24.803102748" observedRunningTime="2025-01-15 12:51:05.088620103 +0000 UTC m=+25.254103413" watchObservedRunningTime="2025-01-15 12:51:07.094901327 +0000 UTC m=+27.260384677" Jan 15 12:51:07.300972 containerd[1702]: time="2025-01-15T12:51:07.300914656Z" level=info msg="shim disconnected" id=814009a40546e3923fd838318b67abc6dfb1cec388b54af82205a5e846695009 namespace=k8s.io Jan 15 12:51:07.301534 containerd[1702]: time="2025-01-15T12:51:07.301372337Z" level=warning msg="cleaning up after shim disconnected" id=814009a40546e3923fd838318b67abc6dfb1cec388b54af82205a5e846695009 namespace=k8s.io Jan 15 12:51:07.301534 containerd[1702]: time="2025-01-15T12:51:07.301396257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 12:51:07.954674 kubelet[3253]: E0115 12:51:07.954024 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw7xv" podUID="b15694bf-ebfd-4bd2-8276-f46e85d79323" Jan 15 12:51:08.083034 containerd[1702]: time="2025-01-15T12:51:08.082995841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 15 12:51:09.954294 kubelet[3253]: E0115 12:51:09.954245 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw7xv" podUID="b15694bf-ebfd-4bd2-8276-f46e85d79323" Jan 15 12:51:11.953443 kubelet[3253]: E0115 12:51:11.953197 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw7xv" podUID="b15694bf-ebfd-4bd2-8276-f46e85d79323" Jan 15 12:51:12.438513 containerd[1702]: time="2025-01-15T12:51:12.437749425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:12.439794 containerd[1702]: time="2025-01-15T12:51:12.439758467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 15 12:51:12.443627 containerd[1702]: time="2025-01-15T12:51:12.443575632Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:12.448106 containerd[1702]: time="2025-01-15T12:51:12.448074037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:12.450747 containerd[1702]: time="2025-01-15T12:51:12.449989959Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.366720918s" Jan 15 12:51:12.450747 containerd[1702]: time="2025-01-15T12:51:12.450029479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 15 12:51:12.455832 containerd[1702]: time="2025-01-15T12:51:12.455792166Z" level=info msg="CreateContainer within sandbox \"02e9964dcb316310d33739084894f5e6261ca4aa3e3e23e5c461f5581b4fad9d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 12:51:12.493876 containerd[1702]: time="2025-01-15T12:51:12.493829652Z" level=info msg="CreateContainer within sandbox \"02e9964dcb316310d33739084894f5e6261ca4aa3e3e23e5c461f5581b4fad9d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ddf1f98d580fe9574c14f630a6d68e05d172728bacfe9d1237098d01f0f625c3\"" Jan 15 12:51:12.495832 containerd[1702]: time="2025-01-15T12:51:12.494491813Z" level=info msg="StartContainer for \"ddf1f98d580fe9574c14f630a6d68e05d172728bacfe9d1237098d01f0f625c3\"" Jan 15 12:51:12.524918 systemd[1]: Started cri-containerd-ddf1f98d580fe9574c14f630a6d68e05d172728bacfe9d1237098d01f0f625c3.scope - libcontainer container ddf1f98d580fe9574c14f630a6d68e05d172728bacfe9d1237098d01f0f625c3. Jan 15 12:51:12.551613 containerd[1702]: time="2025-01-15T12:51:12.551573962Z" level=info msg="StartContainer for \"ddf1f98d580fe9574c14f630a6d68e05d172728bacfe9d1237098d01f0f625c3\" returns successfully" Jan 15 12:51:13.625547 systemd[1]: cri-containerd-ddf1f98d580fe9574c14f630a6d68e05d172728bacfe9d1237098d01f0f625c3.scope: Deactivated successfully. Jan 15 12:51:13.645831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddf1f98d580fe9574c14f630a6d68e05d172728bacfe9d1237098d01f0f625c3-rootfs.mount: Deactivated successfully. 
Jan 15 12:51:13.710397 kubelet[3253]: I0115 12:51:13.709020 3253 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 15 12:51:13.984656 kubelet[3253]: I0115 12:51:13.745209 3253 topology_manager.go:215] "Topology Admit Handler" podUID="f965f94f-e6f0-4cbe-9d52-87cecc0ddc61" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nh4f9" Jan 15 12:51:13.984656 kubelet[3253]: I0115 12:51:13.761642 3253 topology_manager.go:215] "Topology Admit Handler" podUID="14500a27-f3f6-499b-94fc-b167797dfc9f" podNamespace="calico-apiserver" podName="calico-apiserver-599d8cb64-xk5xp" Jan 15 12:51:13.984656 kubelet[3253]: I0115 12:51:13.764560 3253 topology_manager.go:215] "Topology Admit Handler" podUID="25df612d-7c87-41cf-a49d-92de2f6930d0" podNamespace="calico-apiserver" podName="calico-apiserver-599d8cb64-5f2nt" Jan 15 12:51:13.984656 kubelet[3253]: I0115 12:51:13.764891 3253 topology_manager.go:215] "Topology Admit Handler" podUID="39441957-8677-4d6c-967c-9eea47f5a2b2" podNamespace="calico-system" podName="calico-kube-controllers-67c6b9d78c-8b976" Jan 15 12:51:13.984656 kubelet[3253]: I0115 12:51:13.765643 3253 topology_manager.go:215] "Topology Admit Handler" podUID="c4e3b7d0-351b-4511-9466-e0a6b59b0959" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9b4kr" Jan 15 12:51:13.984656 kubelet[3253]: I0115 12:51:13.883098 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj2b9\" (UniqueName: \"kubernetes.io/projected/14500a27-f3f6-499b-94fc-b167797dfc9f-kube-api-access-pj2b9\") pod \"calico-apiserver-599d8cb64-xk5xp\" (UID: \"14500a27-f3f6-499b-94fc-b167797dfc9f\") " pod="calico-apiserver/calico-apiserver-599d8cb64-xk5xp" Jan 15 12:51:13.984656 kubelet[3253]: I0115 12:51:13.883144 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4e3b7d0-351b-4511-9466-e0a6b59b0959-config-volume\") pod \"coredns-7db6d8ff4d-9b4kr\" (UID: \"c4e3b7d0-351b-4511-9466-e0a6b59b0959\") " pod="kube-system/coredns-7db6d8ff4d-9b4kr" Jan 15 12:51:13.753393 systemd[1]: Created slice kubepods-burstable-podf965f94f_e6f0_4cbe_9d52_87cecc0ddc61.slice - libcontainer container kubepods-burstable-podf965f94f_e6f0_4cbe_9d52_87cecc0ddc61.slice. 
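The kubelet_node_status line at the top of this stretch ("Fast updating node status as it just became ready") records the node's Ready condition flipping once install-cni has written a CNI config and the runtime reports NetworkReady; the burst of Topology Admit Handler lines after it is, plausibly, the backlog of coredns, calico-apiserver, and calico-kube-controllers pods being admitted now that the node is schedulable. A spot-check of that condition with the standard kubernetes Python client (cluster access assumed; the loop is illustrative, not from the log):

    # List each node's Ready condition, the field that flipped in the line above.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when run inside a pod
    for node in client.CoreV1Api().list_node().items:
        ready = next(c for c in node.status.conditions if c.type == "Ready")
        print(node.metadata.name, ready.status, ready.reason)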
Jan 15 12:51:13.985137 kubelet[3253]: I0115 12:51:13.883167 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/25df612d-7c87-41cf-a49d-92de2f6930d0-calico-apiserver-certs\") pod \"calico-apiserver-599d8cb64-5f2nt\" (UID: \"25df612d-7c87-41cf-a49d-92de2f6930d0\") " pod="calico-apiserver/calico-apiserver-599d8cb64-5f2nt" Jan 15 12:51:13.985137 kubelet[3253]: I0115 12:51:13.883185 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f965f94f-e6f0-4cbe-9d52-87cecc0ddc61-config-volume\") pod \"coredns-7db6d8ff4d-nh4f9\" (UID: \"f965f94f-e6f0-4cbe-9d52-87cecc0ddc61\") " pod="kube-system/coredns-7db6d8ff4d-nh4f9" Jan 15 12:51:13.985137 kubelet[3253]: I0115 12:51:13.883201 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/14500a27-f3f6-499b-94fc-b167797dfc9f-calico-apiserver-certs\") pod \"calico-apiserver-599d8cb64-xk5xp\" (UID: \"14500a27-f3f6-499b-94fc-b167797dfc9f\") " pod="calico-apiserver/calico-apiserver-599d8cb64-xk5xp" Jan 15 12:51:13.985137 kubelet[3253]: I0115 12:51:13.883220 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2272v\" (UniqueName: \"kubernetes.io/projected/f965f94f-e6f0-4cbe-9d52-87cecc0ddc61-kube-api-access-2272v\") pod \"coredns-7db6d8ff4d-nh4f9\" (UID: \"f965f94f-e6f0-4cbe-9d52-87cecc0ddc61\") " pod="kube-system/coredns-7db6d8ff4d-nh4f9" Jan 15 12:51:13.985137 kubelet[3253]: I0115 12:51:13.883237 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xldt7\" (UniqueName: \"kubernetes.io/projected/c4e3b7d0-351b-4511-9466-e0a6b59b0959-kube-api-access-xldt7\") pod \"coredns-7db6d8ff4d-9b4kr\" (UID: \"c4e3b7d0-351b-4511-9466-e0a6b59b0959\") " pod="kube-system/coredns-7db6d8ff4d-9b4kr" Jan 15 12:51:13.774076 systemd[1]: Created slice kubepods-besteffort-pod14500a27_f3f6_499b_94fc_b167797dfc9f.slice - libcontainer container kubepods-besteffort-pod14500a27_f3f6_499b_94fc_b167797dfc9f.slice. 
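The interleaved systemd "Created slice" messages follow the kubelet systemd cgroup driver's naming scheme: each pod gets a slice named kubepods-<qos>-pod<uid>.slice (burstable for the coredns pods, besteffort for the calico ones here), with the dashes of the pod UID rewritten to underscores because systemd reserves "-" as its slice-hierarchy separator. A sketch of the convention, checked against a UID from the log:

    # Reconstruct the slice names seen in the "Created slice" lines.
    # Note: guaranteed-QoS pods sit directly under kubepods.slice, so their
    # name carries no QoS segment.
    def pod_slice_name(pod_uid: str, qos_class: str) -> str:
        escaped = pod_uid.replace("-", "_")
        if qos_class == "guaranteed":
            return f"kubepods-pod{escaped}.slice"
        return f"kubepods-{qos_class}-pod{escaped}.slice"

    assert pod_slice_name("f965f94f-e6f0-4cbe-9d52-87cecc0ddc61", "burstable") == \
        "kubepods-burstable-podf965f94f_e6f0_4cbe_9d52_87cecc0ddc61.slice"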
Jan 15 12:51:13.985346 kubelet[3253]: I0115 12:51:13.883257 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qtwb\" (UniqueName: \"kubernetes.io/projected/25df612d-7c87-41cf-a49d-92de2f6930d0-kube-api-access-5qtwb\") pod \"calico-apiserver-599d8cb64-5f2nt\" (UID: \"25df612d-7c87-41cf-a49d-92de2f6930d0\") " pod="calico-apiserver/calico-apiserver-599d8cb64-5f2nt" Jan 15 12:51:13.985346 kubelet[3253]: I0115 12:51:13.883273 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39441957-8677-4d6c-967c-9eea47f5a2b2-tigera-ca-bundle\") pod \"calico-kube-controllers-67c6b9d78c-8b976\" (UID: \"39441957-8677-4d6c-967c-9eea47f5a2b2\") " pod="calico-system/calico-kube-controllers-67c6b9d78c-8b976" Jan 15 12:51:13.985346 kubelet[3253]: I0115 12:51:13.883290 3253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlgkz\" (UniqueName: \"kubernetes.io/projected/39441957-8677-4d6c-967c-9eea47f5a2b2-kube-api-access-rlgkz\") pod \"calico-kube-controllers-67c6b9d78c-8b976\" (UID: \"39441957-8677-4d6c-967c-9eea47f5a2b2\") " pod="calico-system/calico-kube-controllers-67c6b9d78c-8b976" Jan 15 12:51:13.782475 systemd[1]: Created slice kubepods-besteffort-pod25df612d_7c87_41cf_a49d_92de2f6930d0.slice - libcontainer container kubepods-besteffort-pod25df612d_7c87_41cf_a49d_92de2f6930d0.slice. Jan 15 12:51:13.790137 systemd[1]: Created slice kubepods-besteffort-pod39441957_8677_4d6c_967c_9eea47f5a2b2.slice - libcontainer container kubepods-besteffort-pod39441957_8677_4d6c_967c_9eea47f5a2b2.slice. Jan 15 12:51:13.796089 systemd[1]: Created slice kubepods-burstable-podc4e3b7d0_351b_4511_9466_e0a6b59b0959.slice - libcontainer container kubepods-burstable-podc4e3b7d0_351b_4511_9466_e0a6b59b0959.slice. Jan 15 12:51:13.963660 systemd[1]: Created slice kubepods-besteffort-podb15694bf_ebfd_4bd2_8276_f46e85d79323.slice - libcontainer container kubepods-besteffort-podb15694bf_ebfd_4bd2_8276_f46e85d79323.slice. 
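Every RunPodSandbox attempt from here on fails identically: the Calico CNI plugin stats /var/lib/calico/nodename, a file that only a running calico/node container writes, and calico/node is still being fetched (its PullImage line appears at 12:51:15.110 below). Because the same check guards teardown, each failed setup is immediately followed by a matching "Failed to destroy network" error and the sandbox is left in SANDBOX_UNKNOWN. A hypothetical reconstruction of the gate, with the path and wording taken from the log:

    # Sketch of the nodename check the errors below keep hitting.
    import os

    NODENAME_FILE = "/var/lib/calico/nodename"

    def calico_node_name() -> str:
        if not os.path.exists(NODENAME_FILE):
            raise FileNotFoundError(
                f"stat {NODENAME_FILE}: no such file or directory: check that "
                "the calico/node container is running and has mounted /var/lib/calico/")
        with open(NODENAME_FILE) as f:
            return f.read().strip()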
Jan 15 12:51:14.002325 containerd[1702]: time="2025-01-15T12:51:14.002000115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw7xv,Uid:b15694bf-ebfd-4bd2-8276-f46e85d79323,Namespace:calico-system,Attempt:0,}" Jan 15 12:51:14.290020 containerd[1702]: time="2025-01-15T12:51:14.289909663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nh4f9,Uid:f965f94f-e6f0-4cbe-9d52-87cecc0ddc61,Namespace:kube-system,Attempt:0,}" Jan 15 12:51:14.299447 containerd[1702]: time="2025-01-15T12:51:14.299237035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9b4kr,Uid:c4e3b7d0-351b-4511-9466-e0a6b59b0959,Namespace:kube-system,Attempt:0,}" Jan 15 12:51:14.299447 containerd[1702]: time="2025-01-15T12:51:14.299279035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599d8cb64-5f2nt,Uid:25df612d-7c87-41cf-a49d-92de2f6930d0,Namespace:calico-apiserver,Attempt:0,}" Jan 15 12:51:14.299447 containerd[1702]: time="2025-01-15T12:51:14.299236515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c6b9d78c-8b976,Uid:39441957-8677-4d6c-967c-9eea47f5a2b2,Namespace:calico-system,Attempt:0,}" Jan 15 12:51:14.299980 containerd[1702]: time="2025-01-15T12:51:14.299784315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599d8cb64-xk5xp,Uid:14500a27-f3f6-499b-94fc-b167797dfc9f,Namespace:calico-apiserver,Attempt:0,}" Jan 15 12:51:14.835772 containerd[1702]: time="2025-01-15T12:51:14.835670603Z" level=info msg="shim disconnected" id=ddf1f98d580fe9574c14f630a6d68e05d172728bacfe9d1237098d01f0f625c3 namespace=k8s.io Jan 15 12:51:14.835772 containerd[1702]: time="2025-01-15T12:51:14.835742963Z" level=warning msg="cleaning up after shim disconnected" id=ddf1f98d580fe9574c14f630a6d68e05d172728bacfe9d1237098d01f0f625c3 namespace=k8s.io Jan 15 12:51:14.835772 containerd[1702]: time="2025-01-15T12:51:14.835752163Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 12:51:14.997169 containerd[1702]: time="2025-01-15T12:51:14.996949878Z" level=error msg="Failed to destroy network for sandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:14.998884 containerd[1702]: time="2025-01-15T12:51:14.998831080Z" level=error msg="encountered an error cleaning up failed sandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:14.999388 containerd[1702]: time="2025-01-15T12:51:14.999358761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw7xv,Uid:b15694bf-ebfd-4bd2-8276-f46e85d79323,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.001550 kubelet[3253]: E0115 12:51:15.000203 3253 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.001550 kubelet[3253]: E0115 12:51:15.000273 3253 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw7xv" Jan 15 12:51:15.001550 kubelet[3253]: E0115 12:51:15.000293 3253 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw7xv" Jan 15 12:51:15.001894 kubelet[3253]: E0115 12:51:15.000349 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kw7xv_calico-system(b15694bf-ebfd-4bd2-8276-f46e85d79323)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kw7xv_calico-system(b15694bf-ebfd-4bd2-8276-f46e85d79323)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kw7xv" podUID="b15694bf-ebfd-4bd2-8276-f46e85d79323" Jan 15 12:51:15.101918 containerd[1702]: time="2025-01-15T12:51:15.101796845Z" level=error msg="Failed to destroy network for sandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.103246 containerd[1702]: time="2025-01-15T12:51:15.103012246Z" level=error msg="encountered an error cleaning up failed sandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.103246 containerd[1702]: time="2025-01-15T12:51:15.103071006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nh4f9,Uid:f965f94f-e6f0-4cbe-9d52-87cecc0ddc61,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.105017 kubelet[3253]: E0115 12:51:15.103368 3253 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.105017 kubelet[3253]: E0115 12:51:15.103418 3253 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nh4f9" Jan 15 12:51:15.105017 kubelet[3253]: E0115 12:51:15.103451 3253 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nh4f9" Jan 15 12:51:15.105241 kubelet[3253]: E0115 12:51:15.103495 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nh4f9_kube-system(f965f94f-e6f0-4cbe-9d52-87cecc0ddc61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nh4f9_kube-system(f965f94f-e6f0-4cbe-9d52-87cecc0ddc61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nh4f9" podUID="f965f94f-e6f0-4cbe-9d52-87cecc0ddc61" Jan 15 12:51:15.111313 containerd[1702]: time="2025-01-15T12:51:15.110016935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 15 12:51:15.112601 kubelet[3253]: I0115 12:51:15.111995 3253 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:15.116770 containerd[1702]: time="2025-01-15T12:51:15.114649940Z" level=info msg="StopPodSandbox for \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\"" Jan 15 12:51:15.116984 containerd[1702]: time="2025-01-15T12:51:15.116962463Z" level=info msg="Ensure that sandbox 6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078 in task-service has been cleanup successfully" Jan 15 12:51:15.126120 containerd[1702]: time="2025-01-15T12:51:15.126064314Z" level=error msg="Failed to destroy network for sandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.126741 containerd[1702]: time="2025-01-15T12:51:15.126378874Z" level=error msg="encountered an error cleaning up failed sandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.126741 containerd[1702]: time="2025-01-15T12:51:15.126429554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599d8cb64-5f2nt,Uid:25df612d-7c87-41cf-a49d-92de2f6930d0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.126867 kubelet[3253]: E0115 12:51:15.126674 3253 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.126867 kubelet[3253]: E0115 12:51:15.126787 3253 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599d8cb64-5f2nt" Jan 15 12:51:15.126867 kubelet[3253]: E0115 12:51:15.126810 3253 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599d8cb64-5f2nt" Jan 15 12:51:15.126966 kubelet[3253]: E0115 12:51:15.126849 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-599d8cb64-5f2nt_calico-apiserver(25df612d-7c87-41cf-a49d-92de2f6930d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-599d8cb64-5f2nt_calico-apiserver(25df612d-7c87-41cf-a49d-92de2f6930d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599d8cb64-5f2nt" podUID="25df612d-7c87-41cf-a49d-92de2f6930d0" Jan 15 12:51:15.152061 containerd[1702]: time="2025-01-15T12:51:15.151932545Z" level=error msg="Failed to destroy network for sandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.152489 containerd[1702]: time="2025-01-15T12:51:15.152456186Z" level=error 
msg="encountered an error cleaning up failed sandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.152591 containerd[1702]: time="2025-01-15T12:51:15.152570706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c6b9d78c-8b976,Uid:39441957-8677-4d6c-967c-9eea47f5a2b2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.152903 containerd[1702]: time="2025-01-15T12:51:15.152858106Z" level=error msg="Failed to destroy network for sandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.152965 kubelet[3253]: E0115 12:51:15.152921 3253 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.153006 kubelet[3253]: E0115 12:51:15.152972 3253 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67c6b9d78c-8b976" Jan 15 12:51:15.153006 kubelet[3253]: E0115 12:51:15.152995 3253 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67c6b9d78c-8b976" Jan 15 12:51:15.153060 kubelet[3253]: E0115 12:51:15.153039 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67c6b9d78c-8b976_calico-system(39441957-8677-4d6c-967c-9eea47f5a2b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67c6b9d78c-8b976_calico-system(39441957-8677-4d6c-967c-9eea47f5a2b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-67c6b9d78c-8b976" podUID="39441957-8677-4d6c-967c-9eea47f5a2b2" Jan 15 12:51:15.155347 containerd[1702]: time="2025-01-15T12:51:15.154904509Z" level=error msg="encountered an error cleaning up failed sandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.155347 containerd[1702]: time="2025-01-15T12:51:15.154956949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9b4kr,Uid:c4e3b7d0-351b-4511-9466-e0a6b59b0959,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.155443 kubelet[3253]: E0115 12:51:15.155218 3253 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.155443 kubelet[3253]: E0115 12:51:15.155258 3253 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9b4kr" Jan 15 12:51:15.155443 kubelet[3253]: E0115 12:51:15.155274 3253 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9b4kr" Jan 15 12:51:15.155524 kubelet[3253]: E0115 12:51:15.155314 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9b4kr_kube-system(c4e3b7d0-351b-4511-9466-e0a6b59b0959)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9b4kr_kube-system(c4e3b7d0-351b-4511-9466-e0a6b59b0959)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9b4kr" podUID="c4e3b7d0-351b-4511-9466-e0a6b59b0959" Jan 15 12:51:15.157622 containerd[1702]: time="2025-01-15T12:51:15.157594432Z" level=error msg="Failed to destroy network for sandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.158238 containerd[1702]: time="2025-01-15T12:51:15.157944873Z" level=error msg="encountered an error cleaning up failed sandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.158238 containerd[1702]: time="2025-01-15T12:51:15.157995873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599d8cb64-xk5xp,Uid:14500a27-f3f6-499b-94fc-b167797dfc9f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.158353 kubelet[3253]: E0115 12:51:15.158118 3253 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.158353 kubelet[3253]: E0115 12:51:15.158154 3253 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599d8cb64-xk5xp" Jan 15 12:51:15.158353 kubelet[3253]: E0115 12:51:15.158169 3253 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599d8cb64-xk5xp" Jan 15 12:51:15.158429 kubelet[3253]: E0115 12:51:15.158201 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-599d8cb64-xk5xp_calico-apiserver(14500a27-f3f6-499b-94fc-b167797dfc9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-599d8cb64-xk5xp_calico-apiserver(14500a27-f3f6-499b-94fc-b167797dfc9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599d8cb64-xk5xp" podUID="14500a27-f3f6-499b-94fc-b167797dfc9f" Jan 15 12:51:15.168789 containerd[1702]: time="2025-01-15T12:51:15.168712366Z" 
level=error msg="StopPodSandbox for \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\" failed" error="failed to destroy network for sandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:15.169010 kubelet[3253]: E0115 12:51:15.168927 3253 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:15.169086 kubelet[3253]: E0115 12:51:15.168977 3253 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078"} Jan 15 12:51:15.169086 kubelet[3253]: E0115 12:51:15.169036 3253 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b15694bf-ebfd-4bd2-8276-f46e85d79323\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:15.169086 kubelet[3253]: E0115 12:51:15.169057 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b15694bf-ebfd-4bd2-8276-f46e85d79323\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kw7xv" podUID="b15694bf-ebfd-4bd2-8276-f46e85d79323" Jan 15 12:51:15.647234 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf-shm.mount: Deactivated successfully. Jan 15 12:51:15.647330 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1-shm.mount: Deactivated successfully. Jan 15 12:51:15.647382 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078-shm.mount: Deactivated successfully. 
Jan 15 12:51:16.114462 kubelet[3253]: I0115 12:51:16.114421 3253 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:16.115824 containerd[1702]: time="2025-01-15T12:51:16.115186190Z" level=info msg="StopPodSandbox for \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\"" Jan 15 12:51:16.115824 containerd[1702]: time="2025-01-15T12:51:16.115394150Z" level=info msg="Ensure that sandbox 1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33 in task-service has been cleanup successfully" Jan 15 12:51:16.117898 kubelet[3253]: I0115 12:51:16.117832 3253 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:16.118910 containerd[1702]: time="2025-01-15T12:51:16.118865754Z" level=info msg="StopPodSandbox for \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\"" Jan 15 12:51:16.119287 containerd[1702]: time="2025-01-15T12:51:16.119157514Z" level=info msg="Ensure that sandbox 940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe in task-service has been cleanup successfully" Jan 15 12:51:16.120267 kubelet[3253]: I0115 12:51:16.119774 3253 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:16.120563 containerd[1702]: time="2025-01-15T12:51:16.120538956Z" level=info msg="StopPodSandbox for \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\"" Jan 15 12:51:16.120808 containerd[1702]: time="2025-01-15T12:51:16.120786956Z" level=info msg="Ensure that sandbox 95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf in task-service has been cleanup successfully" Jan 15 12:51:16.122595 kubelet[3253]: I0115 12:51:16.122559 3253 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:16.124628 containerd[1702]: time="2025-01-15T12:51:16.124603121Z" level=info msg="StopPodSandbox for \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\"" Jan 15 12:51:16.124944 containerd[1702]: time="2025-01-15T12:51:16.124908081Z" level=info msg="Ensure that sandbox f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82 in task-service has been cleanup successfully" Jan 15 12:51:16.128931 kubelet[3253]: I0115 12:51:16.128296 3253 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:16.129781 containerd[1702]: time="2025-01-15T12:51:16.129625407Z" level=info msg="StopPodSandbox for \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\"" Jan 15 12:51:16.130439 containerd[1702]: time="2025-01-15T12:51:16.130189808Z" level=info msg="Ensure that sandbox d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1 in task-service has been cleanup successfully" Jan 15 12:51:16.188896 containerd[1702]: time="2025-01-15T12:51:16.188838279Z" level=error msg="StopPodSandbox for \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\" failed" error="failed to destroy network for sandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.189246 kubelet[3253]: E0115 12:51:16.189195 3253 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:16.189392 kubelet[3253]: E0115 12:51:16.189367 3253 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe"} Jan 15 12:51:16.189492 kubelet[3253]: E0115 12:51:16.189477 3253 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"25df612d-7c87-41cf-a49d-92de2f6930d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:16.189665 kubelet[3253]: E0115 12:51:16.189563 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"25df612d-7c87-41cf-a49d-92de2f6930d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599d8cb64-5f2nt" podUID="25df612d-7c87-41cf-a49d-92de2f6930d0" Jan 15 12:51:16.196608 containerd[1702]: time="2025-01-15T12:51:16.196355368Z" level=error msg="StopPodSandbox for \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\" failed" error="failed to destroy network for sandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.196705 kubelet[3253]: E0115 12:51:16.196587 3253 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:16.196705 kubelet[3253]: E0115 12:51:16.196643 3253 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82"} Jan 15 12:51:16.196705 kubelet[3253]: E0115 12:51:16.196673 3253 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"14500a27-f3f6-499b-94fc-b167797dfc9f\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:16.196705 kubelet[3253]: E0115 12:51:16.196695 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"14500a27-f3f6-499b-94fc-b167797dfc9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599d8cb64-xk5xp" podUID="14500a27-f3f6-499b-94fc-b167797dfc9f" Jan 15 12:51:16.198682 containerd[1702]: time="2025-01-15T12:51:16.198278450Z" level=error msg="StopPodSandbox for \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\" failed" error="failed to destroy network for sandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.198791 kubelet[3253]: E0115 12:51:16.198446 3253 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:16.198791 kubelet[3253]: E0115 12:51:16.198479 3253 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33"} Jan 15 12:51:16.198791 kubelet[3253]: E0115 12:51:16.198505 3253 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39441957-8677-4d6c-967c-9eea47f5a2b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:16.198791 kubelet[3253]: E0115 12:51:16.198527 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39441957-8677-4d6c-967c-9eea47f5a2b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67c6b9d78c-8b976" podUID="39441957-8677-4d6c-967c-9eea47f5a2b2" Jan 15 12:51:16.201017 containerd[1702]: time="2025-01-15T12:51:16.200984093Z" level=error 
msg="StopPodSandbox for \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\" failed" error="failed to destroy network for sandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.201272 kubelet[3253]: E0115 12:51:16.201237 3253 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:16.201935 kubelet[3253]: E0115 12:51:16.201485 3253 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf"} Jan 15 12:51:16.201935 kubelet[3253]: E0115 12:51:16.201524 3253 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e3b7d0-351b-4511-9466-e0a6b59b0959\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:16.201935 kubelet[3253]: E0115 12:51:16.201542 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e3b7d0-351b-4511-9466-e0a6b59b0959\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9b4kr" podUID="c4e3b7d0-351b-4511-9466-e0a6b59b0959" Jan 15 12:51:16.203020 containerd[1702]: time="2025-01-15T12:51:16.202984536Z" level=error msg="StopPodSandbox for \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\" failed" error="failed to destroy network for sandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.203323 kubelet[3253]: E0115 12:51:16.203143 3253 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:16.203323 kubelet[3253]: E0115 12:51:16.203176 3253 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1"} Jan 15 12:51:16.203323 kubelet[3253]: E0115 12:51:16.203203 3253 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f965f94f-e6f0-4cbe-9d52-87cecc0ddc61\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:16.203323 kubelet[3253]: E0115 12:51:16.203221 3253 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f965f94f-e6f0-4cbe-9d52-87cecc0ddc61\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nh4f9" podUID="f965f94f-e6f0-4cbe-9d52-87cecc0ddc61" Jan 15 12:51:22.263999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3072332505.mount: Deactivated successfully. Jan 15 12:51:22.611220 containerd[1702]: time="2025-01-15T12:51:22.611092427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:22.613180 containerd[1702]: time="2025-01-15T12:51:22.613127630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 15 12:51:22.616605 containerd[1702]: time="2025-01-15T12:51:22.616552474Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:22.620790 containerd[1702]: time="2025-01-15T12:51:22.620701279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:22.621692 containerd[1702]: time="2025-01-15T12:51:22.621247999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.511197024s" Jan 15 12:51:22.621692 containerd[1702]: time="2025-01-15T12:51:22.621287599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 15 12:51:22.635119 containerd[1702]: time="2025-01-15T12:51:22.635071096Z" level=info msg="CreateContainer within sandbox \"02e9964dcb316310d33739084894f5e6261ca4aa3e3e23e5c461f5581b4fad9d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 12:51:22.669190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038454696.mount: Deactivated successfully. 
Jan 15 12:51:22.679478 containerd[1702]: time="2025-01-15T12:51:22.679383908Z" level=info msg="CreateContainer within sandbox \"02e9964dcb316310d33739084894f5e6261ca4aa3e3e23e5c461f5581b4fad9d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a3cc874d97a339c564f2981f7fffa13862cb94709b0942829a2fe34ef82f5900\"" Jan 15 12:51:22.681417 containerd[1702]: time="2025-01-15T12:51:22.680026949Z" level=info msg="StartContainer for \"a3cc874d97a339c564f2981f7fffa13862cb94709b0942829a2fe34ef82f5900\"" Jan 15 12:51:22.708915 systemd[1]: Started cri-containerd-a3cc874d97a339c564f2981f7fffa13862cb94709b0942829a2fe34ef82f5900.scope - libcontainer container a3cc874d97a339c564f2981f7fffa13862cb94709b0942829a2fe34ef82f5900. Jan 15 12:51:22.740231 containerd[1702]: time="2025-01-15T12:51:22.740188741Z" level=info msg="StartContainer for \"a3cc874d97a339c564f2981f7fffa13862cb94709b0942829a2fe34ef82f5900\" returns successfully" Jan 15 12:51:23.030128 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 12:51:23.030252 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 15 12:51:26.954237 containerd[1702]: time="2025-01-15T12:51:26.954181454Z" level=info msg="StopPodSandbox for \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\"" Jan 15 12:51:27.012515 kubelet[3253]: I0115 12:51:27.012025 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vggn8" podStartSLOduration=4.962788347 podStartE2EDuration="25.012005684s" podCreationTimestamp="2025-01-15 12:51:02 +0000 UTC" firstStartedPulling="2025-01-15 12:51:02.573052704 +0000 UTC m=+22.738536054" lastFinishedPulling="2025-01-15 12:51:22.622270041 +0000 UTC m=+42.787753391" observedRunningTime="2025-01-15 12:51:23.179059541 +0000 UTC m=+43.344542891" watchObservedRunningTime="2025-01-15 12:51:27.012005684 +0000 UTC m=+47.177489034" Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.012 [INFO][4575] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.013 [INFO][4575] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" iface="eth0" netns="/var/run/netns/cni-770bdfa7-ea43-3cfb-ecd4-3d966ee443e1" Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.014 [INFO][4575] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" iface="eth0" netns="/var/run/netns/cni-770bdfa7-ea43-3cfb-ecd4-3d966ee443e1" Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.015 [INFO][4575] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" iface="eth0" netns="/var/run/netns/cni-770bdfa7-ea43-3cfb-ecd4-3d966ee443e1" Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.015 [INFO][4575] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.015 [INFO][4575] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.033 [INFO][4581] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" HandleID="k8s-pod-network.940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.033 [INFO][4581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.033 [INFO][4581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.041 [WARNING][4581] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" HandleID="k8s-pod-network.940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.041 [INFO][4581] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" HandleID="k8s-pod-network.940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.042 [INFO][4581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:27.047150 containerd[1702]: 2025-01-15 12:51:27.045 [INFO][4575] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:27.049968 containerd[1702]: time="2025-01-15T12:51:27.049922009Z" level=info msg="TearDown network for sandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\" successfully" Jan 15 12:51:27.049968 containerd[1702]: time="2025-01-15T12:51:27.049959769Z" level=info msg="StopPodSandbox for \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\" returns successfully" Jan 15 12:51:27.051077 containerd[1702]: time="2025-01-15T12:51:27.051033131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599d8cb64-5f2nt,Uid:25df612d-7c87-41cf-a49d-92de2f6930d0,Namespace:calico-apiserver,Attempt:1,}" Jan 15 12:51:27.051467 systemd[1]: run-netns-cni\x2d770bdfa7\x2dea43\x2d3cfb\x2decd4\x2d3d966ee443e1.mount: Deactivated successfully. 
Jan 15 12:51:27.222299 systemd-networkd[1331]: cali6eb8f35cf84: Link UP Jan 15 12:51:27.222499 systemd-networkd[1331]: cali6eb8f35cf84: Gained carrier Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.106 [INFO][4588] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.119 [INFO][4588] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0 calico-apiserver-599d8cb64- calico-apiserver 25df612d-7c87-41cf-a49d-92de2f6930d0 770 0 2025-01-15 12:51:00 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:599d8cb64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-f89ceb891c calico-apiserver-599d8cb64-5f2nt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6eb8f35cf84 [] []}} ContainerID="d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-5f2nt" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-" Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.119 [INFO][4588] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-5f2nt" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.143 [INFO][4598] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" HandleID="k8s-pod-network.d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.154 [INFO][4598] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" HandleID="k8s-pod-network.d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cb70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-f89ceb891c", "pod":"calico-apiserver-599d8cb64-5f2nt", "timestamp":"2025-01-15 12:51:27.143694962 +0000 UTC"}, Hostname:"ci-4081.3.0-a-f89ceb891c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.154 [INFO][4598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.154 [INFO][4598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.154 [INFO][4598] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-f89ceb891c' Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.156 [INFO][4598] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.159 [INFO][4598] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.163 [INFO][4598] ipam/ipam.go 489: Trying affinity for 192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.165 [INFO][4598] ipam/ipam.go 155: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.167 [INFO][4598] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.167 [INFO][4598] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.169 [INFO][4598] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.177 [INFO][4598] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.186 [INFO][4598] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.82.193/26] block=192.168.82.192/26 handle="k8s-pod-network.d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.186 [INFO][4598] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.193/26] handle="k8s-pod-network.d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.186 [INFO][4598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 12:51:27.256655 containerd[1702]: 2025-01-15 12:51:27.186 [INFO][4598] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.193/26] IPv6=[] ContainerID="d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" HandleID="k8s-pod-network.d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:27.257963 containerd[1702]: 2025-01-15 12:51:27.189 [INFO][4588] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-5f2nt" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0", GenerateName:"calico-apiserver-599d8cb64-", Namespace:"calico-apiserver", SelfLink:"", UID:"25df612d-7c87-41cf-a49d-92de2f6930d0", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599d8cb64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"", Pod:"calico-apiserver-599d8cb64-5f2nt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6eb8f35cf84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:27.257963 containerd[1702]: 2025-01-15 12:51:27.190 [INFO][4588] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.82.193/32] ContainerID="d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-5f2nt" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:27.257963 containerd[1702]: 2025-01-15 12:51:27.190 [INFO][4588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6eb8f35cf84 ContainerID="d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-5f2nt" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:27.257963 containerd[1702]: 2025-01-15 12:51:27.222 [INFO][4588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-5f2nt" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:27.257963 containerd[1702]: 2025-01-15 12:51:27.224 [INFO][4588] cni-plugin/k8s.go 414: Added Mac, interface
name, and active container ID to endpoint ContainerID="d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-5f2nt" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0", GenerateName:"calico-apiserver-599d8cb64-", Namespace:"calico-apiserver", SelfLink:"", UID:"25df612d-7c87-41cf-a49d-92de2f6930d0", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599d8cb64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e", Pod:"calico-apiserver-599d8cb64-5f2nt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6eb8f35cf84", MAC:"c2:f3:be:1d:7b:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:27.257963 containerd[1702]: 2025-01-15 12:51:27.251 [INFO][4588] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-5f2nt" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:27.285086 containerd[1702]: time="2025-01-15T12:51:27.284769851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:27.285086 containerd[1702]: time="2025-01-15T12:51:27.284822811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:27.285086 containerd[1702]: time="2025-01-15T12:51:27.284834011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:27.285086 containerd[1702]: time="2025-01-15T12:51:27.284909171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:27.307915 systemd[1]: Started cri-containerd-d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e.scope - libcontainer container d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e.
Jan 15 12:51:27.336574 containerd[1702]: time="2025-01-15T12:51:27.336522873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599d8cb64-5f2nt,Uid:25df612d-7c87-41cf-a49d-92de2f6930d0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e\"" Jan 15 12:51:27.338449 containerd[1702]: time="2025-01-15T12:51:27.338411995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 12:51:27.955005 containerd[1702]: time="2025-01-15T12:51:27.954947175Z" level=info msg="StopPodSandbox for \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\"" Jan 15 12:51:27.955357 containerd[1702]: time="2025-01-15T12:51:27.955340136Z" level=info msg="StopPodSandbox for \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\"" Jan 15 12:51:28.049340 systemd[1]: run-containerd-runc-k8s.io-d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e-runc.FSaPci.mount: Deactivated successfully. Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.014 [INFO][4706] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.015 [INFO][4706] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" iface="eth0" netns="/var/run/netns/cni-d5d3bb01-66a9-aa60-ade4-c5e6ba034360" Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.016 [INFO][4706] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" iface="eth0" netns="/var/run/netns/cni-d5d3bb01-66a9-aa60-ade4-c5e6ba034360" Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.017 [INFO][4706] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" iface="eth0" netns="/var/run/netns/cni-d5d3bb01-66a9-aa60-ade4-c5e6ba034360" Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.017 [INFO][4706] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.017 [INFO][4706] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.040 [INFO][4718] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" HandleID="k8s-pod-network.6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Workload="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.040 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.040 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.051 [WARNING][4718] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" HandleID="k8s-pod-network.6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Workload="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.051 [INFO][4718] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" HandleID="k8s-pod-network.6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Workload="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.052 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:28.055126 containerd[1702]: 2025-01-15 12:51:28.054 [INFO][4706] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:28.057568 systemd[1]: run-netns-cni\x2dd5d3bb01\x2d66a9\x2daa60\x2dade4\x2dc5e6ba034360.mount: Deactivated successfully. Jan 15 12:51:28.058124 containerd[1702]: time="2025-01-15T12:51:28.057618419Z" level=info msg="TearDown network for sandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\" successfully" Jan 15 12:51:28.058124 containerd[1702]: time="2025-01-15T12:51:28.057646939Z" level=info msg="StopPodSandbox for \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\" returns successfully" Jan 15 12:51:28.059570 containerd[1702]: time="2025-01-15T12:51:28.059543141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw7xv,Uid:b15694bf-ebfd-4bd2-8276-f46e85d79323,Namespace:calico-system,Attempt:1,}" Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.021 [INFO][4705] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.021 [INFO][4705] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" iface="eth0" netns="/var/run/netns/cni-a9d22386-6ae1-f9b4-8623-6e0a655b4174" Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.021 [INFO][4705] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" iface="eth0" netns="/var/run/netns/cni-a9d22386-6ae1-f9b4-8623-6e0a655b4174" Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.021 [INFO][4705] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" iface="eth0" netns="/var/run/netns/cni-a9d22386-6ae1-f9b4-8623-6e0a655b4174" Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.021 [INFO][4705] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.021 [INFO][4705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.050 [INFO][4722] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" HandleID="k8s-pod-network.95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.050 [INFO][4722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.052 [INFO][4722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.064 [WARNING][4722] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" HandleID="k8s-pod-network.95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.064 [INFO][4722] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" HandleID="k8s-pod-network.95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.066 [INFO][4722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:28.069323 containerd[1702]: 2025-01-15 12:51:28.067 [INFO][4705] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:28.070696 containerd[1702]: time="2025-01-15T12:51:28.070161954Z" level=info msg="TearDown network for sandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\" successfully" Jan 15 12:51:28.070696 containerd[1702]: time="2025-01-15T12:51:28.070189714Z" level=info msg="StopPodSandbox for \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\" returns successfully" Jan 15 12:51:28.072524 containerd[1702]: time="2025-01-15T12:51:28.070907954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9b4kr,Uid:c4e3b7d0-351b-4511-9466-e0a6b59b0959,Namespace:kube-system,Attempt:1,}" Jan 15 12:51:28.073078 systemd[1]: run-netns-cni\x2da9d22386\x2d6ae1\x2df9b4\x2d8623\x2d6e0a655b4174.mount: Deactivated successfully. 
Jan 15 12:51:28.208334 kubelet[3253]: I0115 12:51:28.208030 3253 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:51:28.240460 systemd-networkd[1331]: calid6404326b95: Link UP Jan 15 12:51:28.240748 systemd-networkd[1331]: calid6404326b95: Gained carrier Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.127 [INFO][4730] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.146 [INFO][4730] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0 csi-node-driver- calico-system b15694bf-ebfd-4bd2-8276-f46e85d79323 780 0 2025-01-15 12:51:02 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-f89ceb891c csi-node-driver-kw7xv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid6404326b95 [] []}} ContainerID="1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" Namespace="calico-system" Pod="csi-node-driver-kw7xv" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-" Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.146 [INFO][4730] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" Namespace="calico-system" Pod="csi-node-driver-kw7xv" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.186 [INFO][4752] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" HandleID="k8s-pod-network.1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" Workload="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.197 [INFO][4752] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" HandleID="k8s-pod-network.1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" Workload="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7500), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-f89ceb891c", "pod":"csi-node-driver-kw7xv", "timestamp":"2025-01-15 12:51:28.186259573 +0000 UTC"}, Hostname:"ci-4081.3.0-a-f89ceb891c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.197 [INFO][4752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.197 [INFO][4752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.197 [INFO][4752] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-f89ceb891c' Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.200 [INFO][4752] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.205 [INFO][4752] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.210 [INFO][4752] ipam/ipam.go 489: Trying affinity for 192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.212 [INFO][4752] ipam/ipam.go 155: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.216 [INFO][4752] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.216 [INFO][4752] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.217 [INFO][4752] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.222 [INFO][4752] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.231 [INFO][4752] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.82.194/26] block=192.168.82.192/26 handle="k8s-pod-network.1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.232 [INFO][4752] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.194/26] handle="k8s-pod-network.1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.232 [INFO][4752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 12:51:28.260496 containerd[1702]: 2025-01-15 12:51:28.232 [INFO][4752] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.194/26] IPv6=[] ContainerID="1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" HandleID="k8s-pod-network.1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" Workload="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:28.261908 containerd[1702]: 2025-01-15 12:51:28.235 [INFO][4730] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" Namespace="calico-system" Pod="csi-node-driver-kw7xv" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b15694bf-ebfd-4bd2-8276-f46e85d79323", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"", Pod:"csi-node-driver-kw7xv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid6404326b95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:28.261908 containerd[1702]: 2025-01-15 12:51:28.235 [INFO][4730] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.82.194/32] ContainerID="1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" Namespace="calico-system" Pod="csi-node-driver-kw7xv" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:28.261908 containerd[1702]: 2025-01-15 12:51:28.235 [INFO][4730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6404326b95 ContainerID="1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" Namespace="calico-system" Pod="csi-node-driver-kw7xv" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:28.261908 containerd[1702]: 2025-01-15 12:51:28.241 [INFO][4730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" Namespace="calico-system" Pod="csi-node-driver-kw7xv" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:28.261908 containerd[1702]: 2025-01-15 12:51:28.243 [INFO][4730] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" Namespace="calico-system" Pod="csi-node-driver-kw7xv" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b15694bf-ebfd-4bd2-8276-f46e85d79323", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f", Pod:"csi-node-driver-kw7xv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid6404326b95", MAC:"7a:3e:7c:a8:41:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:28.261908 containerd[1702]: 2025-01-15 12:51:28.258 [INFO][4730] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f" Namespace="calico-system" Pod="csi-node-driver-kw7xv" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:28.302196 containerd[1702]: time="2025-01-15T12:51:28.301895392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:28.302196 containerd[1702]: time="2025-01-15T12:51:28.302003952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:28.302196 containerd[1702]: time="2025-01-15T12:51:28.302033832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:28.302802 containerd[1702]: time="2025-01-15T12:51:28.302179472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:28.307128 systemd-networkd[1331]: cali1b778bc43b6: Link UP Jan 15 12:51:28.310081 systemd-networkd[1331]: cali1b778bc43b6: Gained carrier Jan 15 12:51:28.322911 systemd[1]: Started cri-containerd-1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f.scope - libcontainer container 1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f. 
Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.146 [INFO][4740] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.160 [INFO][4740] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0 coredns-7db6d8ff4d- kube-system c4e3b7d0-351b-4511-9466-e0a6b59b0959 781 0 2025-01-15 12:50:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-f89ceb891c coredns-7db6d8ff4d-9b4kr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1b778bc43b6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9b4kr" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-" Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.160 [INFO][4740] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9b4kr" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.192 [INFO][4756] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" HandleID="k8s-pod-network.d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.204 [INFO][4756] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" HandleID="k8s-pod-network.d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316e70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-f89ceb891c", "pod":"coredns-7db6d8ff4d-9b4kr", "timestamp":"2025-01-15 12:51:28.19200354 +0000 UTC"}, Hostname:"ci-4081.3.0-a-f89ceb891c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.204 [INFO][4756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.232 [INFO][4756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.232 [INFO][4756] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-f89ceb891c' Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.241 [INFO][4756] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.262 [INFO][4756] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.272 [INFO][4756] ipam/ipam.go 489: Trying affinity for 192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.274 [INFO][4756] ipam/ipam.go 155: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.277 [INFO][4756] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.277 [INFO][4756] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.279 [INFO][4756] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719 Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.286 [INFO][4756] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.295 [INFO][4756] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.82.195/26] block=192.168.82.192/26 handle="k8s-pod-network.d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.296 [INFO][4756] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.195/26] handle="k8s-pod-network.d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.296 [INFO][4756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 12:51:28.332824 containerd[1702]: 2025-01-15 12:51:28.296 [INFO][4756] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.195/26] IPv6=[] ContainerID="d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" HandleID="k8s-pod-network.d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:28.333404 containerd[1702]: 2025-01-15 12:51:28.298 [INFO][4740] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9b4kr" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c4e3b7d0-351b-4511-9466-e0a6b59b0959", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"", Pod:"coredns-7db6d8ff4d-9b4kr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b778bc43b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:28.333404 containerd[1702]: 2025-01-15 12:51:28.298 [INFO][4740] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.82.195/32] ContainerID="d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9b4kr" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:28.333404 containerd[1702]: 2025-01-15 12:51:28.298 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b778bc43b6 ContainerID="d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9b4kr" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:28.333404 containerd[1702]: 2025-01-15 12:51:28.315 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9b4kr" 
WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:28.333404 containerd[1702]: 2025-01-15 12:51:28.315 [INFO][4740] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9b4kr" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c4e3b7d0-351b-4511-9466-e0a6b59b0959", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719", Pod:"coredns-7db6d8ff4d-9b4kr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b778bc43b6", MAC:"16:65:22:7e:05:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:28.333404 containerd[1702]: 2025-01-15 12:51:28.329 [INFO][4740] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9b4kr" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:28.352932 containerd[1702]: time="2025-01-15T12:51:28.352814053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw7xv,Uid:b15694bf-ebfd-4bd2-8276-f46e85d79323,Namespace:calico-system,Attempt:1,} returns sandbox id \"1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f\"" Jan 15 12:51:28.371153 containerd[1702]: time="2025-01-15T12:51:28.370989995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:28.371417 containerd[1702]: time="2025-01-15T12:51:28.371128635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:28.371417 containerd[1702]: time="2025-01-15T12:51:28.371271235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:28.371881 containerd[1702]: time="2025-01-15T12:51:28.371822276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:28.386879 systemd[1]: Started cri-containerd-d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719.scope - libcontainer container d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719. Jan 15 12:51:28.415363 containerd[1702]: time="2025-01-15T12:51:28.415329888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9b4kr,Uid:c4e3b7d0-351b-4511-9466-e0a6b59b0959,Namespace:kube-system,Attempt:1,} returns sandbox id \"d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719\"" Jan 15 12:51:28.417634 kubelet[3253]: I0115 12:51:28.417606 3253 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:51:28.421509 containerd[1702]: time="2025-01-15T12:51:28.421479095Z" level=info msg="CreateContainer within sandbox \"d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 12:51:28.468751 containerd[1702]: time="2025-01-15T12:51:28.468492832Z" level=info msg="CreateContainer within sandbox \"d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f39d678400ce75271484b6135346b3fa2b6fd80886e1105e63c269d304f8c1d3\"" Jan 15 12:51:28.469345 containerd[1702]: time="2025-01-15T12:51:28.469310233Z" level=info msg="StartContainer for \"f39d678400ce75271484b6135346b3fa2b6fd80886e1105e63c269d304f8c1d3\"" Jan 15 12:51:28.494894 systemd[1]: Started cri-containerd-f39d678400ce75271484b6135346b3fa2b6fd80886e1105e63c269d304f8c1d3.scope - libcontainer container f39d678400ce75271484b6135346b3fa2b6fd80886e1105e63c269d304f8c1d3. 
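The CreateContainer-within-sandbox and StartContainer entries above are driven by the kubelet over CRI. A rough standalone equivalent of that create-then-start sequence, using containerd's Go client directly against the local socket (image ref and IDs here are illustrative, and the kubelet's CRI path does considerably more):

```go
// Rough standalone equivalent of the CreateContainer/StartContainer sequence
// logged above, using containerd's Go client rather than the CRI path.
// A sketch, assuming a local containerd socket.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The kubelet's containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask + Start is the moral equivalent of the "StartContainer" entries.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```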
Jan 15 12:51:28.583468 containerd[1702]: time="2025-01-15T12:51:28.583420810Z" level=info msg="StartContainer for \"f39d678400ce75271484b6135346b3fa2b6fd80886e1105e63c269d304f8c1d3\" returns successfully" Jan 15 12:51:28.705788 kernel: bpftool[4967]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 15 12:51:28.956915 systemd-networkd[1331]: cali6eb8f35cf84: Gained IPv6LL Jan 15 12:51:29.021257 systemd-networkd[1331]: vxlan.calico: Link UP Jan 15 12:51:29.021265 systemd-networkd[1331]: vxlan.calico: Gained carrier Jan 15 12:51:29.226831 kubelet[3253]: I0115 12:51:29.225152 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9b4kr" podStartSLOduration=35.22513306 podStartE2EDuration="35.22513306s" podCreationTimestamp="2025-01-15 12:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:51:29.206079957 +0000 UTC m=+49.371563307" watchObservedRunningTime="2025-01-15 12:51:29.22513306 +0000 UTC m=+49.390616410" Jan 15 12:51:29.404230 systemd-networkd[1331]: cali1b778bc43b6: Gained IPv6LL Jan 15 12:51:29.405026 systemd-networkd[1331]: calid6404326b95: Gained IPv6LL Jan 15 12:51:29.958451 containerd[1702]: time="2025-01-15T12:51:29.958156139Z" level=info msg="StopPodSandbox for \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\"" Jan 15 12:51:29.964951 containerd[1702]: time="2025-01-15T12:51:29.964916627Z" level=info msg="StopPodSandbox for \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\"" Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.035 [INFO][5104] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.035 [INFO][5104] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" iface="eth0" netns="/var/run/netns/cni-ca45ed81-4108-6c4f-d294-15f662a30133" Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.037 [INFO][5104] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" iface="eth0" netns="/var/run/netns/cni-ca45ed81-4108-6c4f-d294-15f662a30133" Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.037 [INFO][5104] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" iface="eth0" netns="/var/run/netns/cni-ca45ed81-4108-6c4f-d294-15f662a30133" Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.037 [INFO][5104] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.037 [INFO][5104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.076 [INFO][5122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" HandleID="k8s-pod-network.1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.076 [INFO][5122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.076 [INFO][5122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.084 [WARNING][5122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" HandleID="k8s-pod-network.1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.084 [INFO][5122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" HandleID="k8s-pod-network.1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.086 [INFO][5122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:30.089318 containerd[1702]: 2025-01-15 12:51:30.088 [INFO][5104] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:30.089961 containerd[1702]: time="2025-01-15T12:51:30.089897937Z" level=info msg="TearDown network for sandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\" successfully" Jan 15 12:51:30.090051 containerd[1702]: time="2025-01-15T12:51:30.090037258Z" level=info msg="StopPodSandbox for \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\" returns successfully" Jan 15 12:51:30.092317 systemd[1]: run-netns-cni\x2dca45ed81\x2d4108\x2d6c4f\x2dd294\x2d15f662a30133.mount: Deactivated successfully. 
Jan 15 12:51:30.095510 containerd[1702]: time="2025-01-15T12:51:30.095292984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c6b9d78c-8b976,Uid:39441957-8677-4d6c-967c-9eea47f5a2b2,Namespace:calico-system,Attempt:1,}" Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.040 [INFO][5112] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.041 [INFO][5112] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" iface="eth0" netns="/var/run/netns/cni-4c9985dc-c146-ca46-52bb-9e8a4c2c7c2c" Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.041 [INFO][5112] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" iface="eth0" netns="/var/run/netns/cni-4c9985dc-c146-ca46-52bb-9e8a4c2c7c2c" Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.041 [INFO][5112] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" iface="eth0" netns="/var/run/netns/cni-4c9985dc-c146-ca46-52bb-9e8a4c2c7c2c" Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.041 [INFO][5112] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.041 [INFO][5112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.077 [INFO][5123] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" HandleID="k8s-pod-network.f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.078 [INFO][5123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.086 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.098 [WARNING][5123] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" HandleID="k8s-pod-network.f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.098 [INFO][5123] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" HandleID="k8s-pod-network.f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.101 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:30.104219 containerd[1702]: 2025-01-15 12:51:30.102 [INFO][5112] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:30.104579 containerd[1702]: time="2025-01-15T12:51:30.104348995Z" level=info msg="TearDown network for sandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\" successfully" Jan 15 12:51:30.104579 containerd[1702]: time="2025-01-15T12:51:30.104371155Z" level=info msg="StopPodSandbox for \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\" returns successfully" Jan 15 12:51:30.106751 containerd[1702]: time="2025-01-15T12:51:30.105040436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599d8cb64-xk5xp,Uid:14500a27-f3f6-499b-94fc-b167797dfc9f,Namespace:calico-apiserver,Attempt:1,}" Jan 15 12:51:30.106731 systemd[1]: run-netns-cni\x2d4c9985dc\x2dc146\x2dca46\x2d52bb\x2d9e8a4c2c7c2c.mount: Deactivated successfully. Jan 15 12:51:30.363957 systemd-networkd[1331]: vxlan.calico: Gained IPv6LL Jan 15 12:51:30.406534 systemd-networkd[1331]: cali389aa636f46: Link UP Jan 15 12:51:30.407478 systemd-networkd[1331]: cali389aa636f46: Gained carrier Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.279 [INFO][5139] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0 calico-apiserver-599d8cb64- calico-apiserver 14500a27-f3f6-499b-94fc-b167797dfc9f 816 0 2025-01-15 12:51:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:599d8cb64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-f89ceb891c calico-apiserver-599d8cb64-xk5xp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali389aa636f46 [] []}} ContainerID="49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-xk5xp" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-" Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.279 [INFO][5139] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-xk5xp" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.330 [INFO][5162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" HandleID="k8s-pod-network.49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.351 [INFO][5162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" HandleID="k8s-pod-network.49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316ad0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-f89ceb891c", "pod":"calico-apiserver-599d8cb64-xk5xp", "timestamp":"2025-01-15 
12:51:30.330064306 +0000 UTC"}, Hostname:"ci-4081.3.0-a-f89ceb891c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.351 [INFO][5162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.351 [INFO][5162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.351 [INFO][5162] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-f89ceb891c' Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.355 [INFO][5162] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.361 [INFO][5162] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.369 [INFO][5162] ipam/ipam.go 489: Trying affinity for 192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.373 [INFO][5162] ipam/ipam.go 155: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.379 [INFO][5162] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.379 [INFO][5162] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.381 [INFO][5162] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8 Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.387 [INFO][5162] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.399 [INFO][5162] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.82.196/26] block=192.168.82.192/26 handle="k8s-pod-network.49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.399 [INFO][5162] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.196/26] handle="k8s-pod-network.49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.399 [INFO][5162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 12:51:30.441706 containerd[1702]: 2025-01-15 12:51:30.399 [INFO][5162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.196/26] IPv6=[] ContainerID="49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" HandleID="k8s-pod-network.49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:30.443166 containerd[1702]: 2025-01-15 12:51:30.402 [INFO][5139] cni-plugin/k8s.go 386: Populated endpoint ContainerID="49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-xk5xp" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0", GenerateName:"calico-apiserver-599d8cb64-", Namespace:"calico-apiserver", SelfLink:"", UID:"14500a27-f3f6-499b-94fc-b167797dfc9f", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599d8cb64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"", Pod:"calico-apiserver-599d8cb64-xk5xp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali389aa636f46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:30.443166 containerd[1702]: 2025-01-15 12:51:30.402 [INFO][5139] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.82.196/32] ContainerID="49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-xk5xp" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:30.443166 containerd[1702]: 2025-01-15 12:51:30.402 [INFO][5139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali389aa636f46 ContainerID="49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-xk5xp" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:30.443166 containerd[1702]: 2025-01-15 12:51:30.407 [INFO][5139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-xk5xp" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:30.443166 containerd[1702]: 2025-01-15 12:51:30.408 [INFO][5139] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-xk5xp" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0", GenerateName:"calico-apiserver-599d8cb64-", Namespace:"calico-apiserver", SelfLink:"", UID:"14500a27-f3f6-499b-94fc-b167797dfc9f", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599d8cb64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8", Pod:"calico-apiserver-599d8cb64-xk5xp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali389aa636f46", MAC:"de:06:6b:5b:7c:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:30.443166 containerd[1702]: 2025-01-15 12:51:30.427 [INFO][5139] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8" Namespace="calico-apiserver" Pod="calico-apiserver-599d8cb64-xk5xp" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:30.491070 systemd-networkd[1331]: calica23d9d760f: Link UP Jan 15 12:51:30.493407 systemd-networkd[1331]: calica23d9d760f: Gained carrier Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.282 [INFO][5149] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0 calico-kube-controllers-67c6b9d78c- calico-system 39441957-8677-4d6c-967c-9eea47f5a2b2 815 0 2025-01-15 12:51:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67c6b9d78c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-f89ceb891c calico-kube-controllers-67c6b9d78c-8b976 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calica23d9d760f [] []}} ContainerID="4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" Namespace="calico-system" Pod="calico-kube-controllers-67c6b9d78c-8b976" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-" Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.282 [INFO][5149] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" Namespace="calico-system" Pod="calico-kube-controllers-67c6b9d78c-8b976" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.337 [INFO][5166] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" HandleID="k8s-pod-network.4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.354 [INFO][5166] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" HandleID="k8s-pod-network.4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004caa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-f89ceb891c", "pod":"calico-kube-controllers-67c6b9d78c-8b976", "timestamp":"2025-01-15 12:51:30.337112594 +0000 UTC"}, Hostname:"ci-4081.3.0-a-f89ceb891c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.354 [INFO][5166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.400 [INFO][5166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.401 [INFO][5166] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-f89ceb891c' Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.407 [INFO][5166] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.422 [INFO][5166] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.440 [INFO][5166] ipam/ipam.go 489: Trying affinity for 192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.446 [INFO][5166] ipam/ipam.go 155: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.451 [INFO][5166] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.451 [INFO][5166] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.454 [INFO][5166] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446 Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.464 [INFO][5166] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.480 [INFO][5166] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.82.197/26] block=192.168.82.192/26 handle="k8s-pod-network.4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.481 [INFO][5166] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.197/26] handle="k8s-pod-network.4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.481 [INFO][5166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 12:51:30.514503 containerd[1702]: 2025-01-15 12:51:30.481 [INFO][5166] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.197/26] IPv6=[] ContainerID="4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" HandleID="k8s-pod-network.4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:30.532019 containerd[1702]: 2025-01-15 12:51:30.486 [INFO][5149] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" Namespace="calico-system" Pod="calico-kube-controllers-67c6b9d78c-8b976" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0", GenerateName:"calico-kube-controllers-67c6b9d78c-", Namespace:"calico-system", SelfLink:"", UID:"39441957-8677-4d6c-967c-9eea47f5a2b2", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c6b9d78c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"", Pod:"calico-kube-controllers-67c6b9d78c-8b976", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calica23d9d760f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:30.532019 containerd[1702]: 2025-01-15 12:51:30.487 [INFO][5149] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.82.197/32] ContainerID="4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" Namespace="calico-system" Pod="calico-kube-controllers-67c6b9d78c-8b976" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:30.532019 containerd[1702]: 2025-01-15 12:51:30.487 [INFO][5149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica23d9d760f ContainerID="4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" Namespace="calico-system" Pod="calico-kube-controllers-67c6b9d78c-8b976" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:30.532019 containerd[1702]: 2025-01-15 12:51:30.494 [INFO][5149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" Namespace="calico-system" Pod="calico-kube-controllers-67c6b9d78c-8b976" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:30.532019 
containerd[1702]: 2025-01-15 12:51:30.494 [INFO][5149] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" Namespace="calico-system" Pod="calico-kube-controllers-67c6b9d78c-8b976" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0", GenerateName:"calico-kube-controllers-67c6b9d78c-", Namespace:"calico-system", SelfLink:"", UID:"39441957-8677-4d6c-967c-9eea47f5a2b2", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c6b9d78c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446", Pod:"calico-kube-controllers-67c6b9d78c-8b976", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calica23d9d760f", MAC:"b6:49:b1:9e:62:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:30.532019 containerd[1702]: 2025-01-15 12:51:30.510 [INFO][5149] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446" Namespace="calico-system" Pod="calico-kube-controllers-67c6b9d78c-8b976" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:30.601276 containerd[1702]: time="2025-01-15T12:51:30.601126911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:30.601276 containerd[1702]: time="2025-01-15T12:51:30.601232911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:30.601276 containerd[1702]: time="2025-01-15T12:51:30.601244111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:30.601813 containerd[1702]: time="2025-01-15T12:51:30.601407071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:30.625676 systemd[1]: Started cri-containerd-49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8.scope - libcontainer container 49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8. 
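In the traces above the CNI plugin sets the host-side veth name (cali389aa636f46, calica23d9d760f) and systemd-networkd then reports "Link UP" and "Gained carrier". A small sketch of inspecting and raising such a link with the vishvananda/netlink package (which Calico's dataplane code also builds on); the interface name is taken from the log, the rest is illustrative:

```go
// Inspect and raise a host-side Calico veth, mirroring the
// "Link UP" / "Gained carrier" events logged by systemd-networkd above.
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	link, err := netlink.LinkByName("calica23d9d760f")
	if err != nil {
		log.Fatal(err)
	}
	// Administratively up; carrier ("Gained carrier") follows once the peer
	// end inside the pod's netns is up as well.
	if err := netlink.LinkSetUp(link); err != nil {
		log.Fatal(err)
	}
	attrs := link.Attrs()
	fmt.Printf("%s: mtu=%d mac=%s state=%s\n",
		attrs.Name, attrs.MTU, attrs.HardwareAddr, attrs.OperState)
}
```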
Jan 15 12:51:30.669604 containerd[1702]: time="2025-01-15T12:51:30.669557073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599d8cb64-xk5xp,Uid:14500a27-f3f6-499b-94fc-b167797dfc9f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8\"" Jan 15 12:51:30.696540 containerd[1702]: time="2025-01-15T12:51:30.696375345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:30.696540 containerd[1702]: time="2025-01-15T12:51:30.696423265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:30.696540 containerd[1702]: time="2025-01-15T12:51:30.696433385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:30.697357 containerd[1702]: time="2025-01-15T12:51:30.696503145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:30.716577 systemd[1]: Started cri-containerd-4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446.scope - libcontainer container 4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446. Jan 15 12:51:30.763346 containerd[1702]: time="2025-01-15T12:51:30.763301466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c6b9d78c-8b976,Uid:39441957-8677-4d6c-967c-9eea47f5a2b2,Namespace:calico-system,Attempt:1,} returns sandbox id \"4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446\"" Jan 15 12:51:30.954383 containerd[1702]: time="2025-01-15T12:51:30.953950174Z" level=info msg="StopPodSandbox for \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\"" Jan 15 12:51:31.028768 containerd[1702]: time="2025-01-15T12:51:31.028542224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:31.032186 containerd[1702]: time="2025-01-15T12:51:31.032143948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 15 12:51:31.037067 containerd[1702]: time="2025-01-15T12:51:31.037009394Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:31.042262 containerd[1702]: time="2025-01-15T12:51:31.042213720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:31.044056 containerd[1702]: time="2025-01-15T12:51:31.044018443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.705567007s" Jan 15 12:51:31.044259 containerd[1702]: time="2025-01-15T12:51:31.044071283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference 
\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 15 12:51:31.046763 containerd[1702]: time="2025-01-15T12:51:31.046149285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 15 12:51:31.047588 containerd[1702]: time="2025-01-15T12:51:31.047542407Z" level=info msg="CreateContainer within sandbox \"d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.012 [INFO][5296] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.012 [INFO][5296] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" iface="eth0" netns="/var/run/netns/cni-f8fb73f0-44cf-24d0-910b-7523976b923b" Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.012 [INFO][5296] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" iface="eth0" netns="/var/run/netns/cni-f8fb73f0-44cf-24d0-910b-7523976b923b" Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.012 [INFO][5296] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" iface="eth0" netns="/var/run/netns/cni-f8fb73f0-44cf-24d0-910b-7523976b923b" Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.013 [INFO][5296] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.013 [INFO][5296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.034 [INFO][5306] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" HandleID="k8s-pod-network.d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.034 [INFO][5306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.034 [INFO][5306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.046 [WARNING][5306] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" HandleID="k8s-pod-network.d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.046 [INFO][5306] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" HandleID="k8s-pod-network.d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.049 [INFO][5306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:31.068340 containerd[1702]: 2025-01-15 12:51:31.050 [INFO][5296] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:31.072432 containerd[1702]: time="2025-01-15T12:51:31.068449552Z" level=info msg="TearDown network for sandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\" successfully" Jan 15 12:51:31.072432 containerd[1702]: time="2025-01-15T12:51:31.068475392Z" level=info msg="StopPodSandbox for \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\" returns successfully" Jan 15 12:51:31.072432 containerd[1702]: time="2025-01-15T12:51:31.069227953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nh4f9,Uid:f965f94f-e6f0-4cbe-9d52-87cecc0ddc61,Namespace:kube-system,Attempt:1,}" Jan 15 12:51:31.094960 systemd[1]: run-netns-cni\x2df8fb73f0\x2d44cf\x2d24d0\x2d910b\x2d7523976b923b.mount: Deactivated successfully. Jan 15 12:51:31.106148 containerd[1702]: time="2025-01-15T12:51:31.106077357Z" level=info msg="CreateContainer within sandbox \"d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5efe1f9a675220c6de123b0db676bb20e5a1614a1369774fc28ad1d273a9366c\"" Jan 15 12:51:31.108180 containerd[1702]: time="2025-01-15T12:51:31.106972398Z" level=info msg="StartContainer for \"5efe1f9a675220c6de123b0db676bb20e5a1614a1369774fc28ad1d273a9366c\"" Jan 15 12:51:31.147912 systemd[1]: Started cri-containerd-5efe1f9a675220c6de123b0db676bb20e5a1614a1369774fc28ad1d273a9366c.scope - libcontainer container 5efe1f9a675220c6de123b0db676bb20e5a1614a1369774fc28ad1d273a9366c. 
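The teardown above follows a fixed bracket: "About to acquire host-wide IPAM lock" / "Acquired host-wide IPAM lock" around the block update, then "Released host-wide IPAM lock", with a release of a missing address downgraded to a WARNING and ignored. A sketch of that serialization pattern only, using an in-process sync.Mutex; Calico's real lock spans separate CNI plugin invocations on the host, so this is an illustration of the logged pattern, not its implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// hostWideLock stands in for the node-wide lock the log lines bracket every
// IPAM assign/release with. A process-local mutex is an assumption made for
// the sketch; the real lock must be shared across plugin processes.
var hostWideLock sync.Mutex

func withIPAMLock(op string, fn func()) {
	fmt.Println("About to acquire host-wide IPAM lock:", op)
	hostWideLock.Lock()
	fmt.Println("Acquired host-wide IPAM lock.")
	defer func() {
		hostWideLock.Unlock()
		fmt.Println("Released host-wide IPAM lock.")
	}()
	fn()
}

func main() {
	withIPAMLock("release by handleID", func() {
		// Mirrors the WARNING above: releasing an address that is already
		// gone is treated as a no-op, which keeps teardown idempotent.
		fmt.Println("Asked to release address but it doesn't exist. Ignoring")
	})
}
```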
Jan 15 12:51:31.193994 containerd[1702]: time="2025-01-15T12:51:31.193839902Z" level=info msg="StartContainer for \"5efe1f9a675220c6de123b0db676bb20e5a1614a1369774fc28ad1d273a9366c\" returns successfully" Jan 15 12:51:31.227412 kubelet[3253]: I0115 12:51:31.227265 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-599d8cb64-5f2nt" podStartSLOduration=27.520164893 podStartE2EDuration="31.227247142s" podCreationTimestamp="2025-01-15 12:51:00 +0000 UTC" firstStartedPulling="2025-01-15 12:51:27.338088715 +0000 UTC m=+47.503572065" lastFinishedPulling="2025-01-15 12:51:31.045170964 +0000 UTC m=+51.210654314" observedRunningTime="2025-01-15 12:51:31.226861902 +0000 UTC m=+51.392345212" watchObservedRunningTime="2025-01-15 12:51:31.227247142 +0000 UTC m=+51.392730492" Jan 15 12:51:31.289056 systemd-networkd[1331]: cali74af2bca0a6: Link UP Jan 15 12:51:31.289249 systemd-networkd[1331]: cali74af2bca0a6: Gained carrier Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.179 [INFO][5322] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0 coredns-7db6d8ff4d- kube-system f965f94f-e6f0-4cbe-9d52-87cecc0ddc61 826 0 2025-01-15 12:50:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-f89ceb891c coredns-7db6d8ff4d-nh4f9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali74af2bca0a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nh4f9" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-" Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.179 [INFO][5322] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nh4f9" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.224 [INFO][5354] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" HandleID="k8s-pod-network.35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.245 [INFO][5354] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" HandleID="k8s-pod-network.35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001f81b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-f89ceb891c", "pod":"coredns-7db6d8ff4d-nh4f9", "timestamp":"2025-01-15 12:51:31.224413339 +0000 UTC"}, Hostname:"ci-4081.3.0-a-f89ceb891c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 
12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.245 [INFO][5354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.245 [INFO][5354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.245 [INFO][5354] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-f89ceb891c' Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.248 [INFO][5354] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.252 [INFO][5354] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.258 [INFO][5354] ipam/ipam.go 489: Trying affinity for 192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.260 [INFO][5354] ipam/ipam.go 155: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.263 [INFO][5354] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.263 [INFO][5354] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.264 [INFO][5354] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.270 [INFO][5354] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.281 [INFO][5354] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.82.198/26] block=192.168.82.192/26 handle="k8s-pod-network.35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.281 [INFO][5354] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.198/26] handle="k8s-pod-network.35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" host="ci-4081.3.0-a-f89ceb891c" Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.281 [INFO][5354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
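The assign flow above runs affinity-first: look up the host's affinities, try the affine block 192.168.82.192/26, load it, then claim the next address (192.168.82.198) and write the block back under the lock. A toy, in-memory sketch of that final "pick a free address in the block" step; the map of allocations is a stand-in for the block document Calico keeps in its datastore:

```go
package main

import (
	"fmt"
	"net"
)

// nextFree returns the first address in block that is not yet allocated,
// or nil if the block is exhausted. Real Calico tracks allocations in the
// datastore block record; the map here is a simplification for the sketch.
func nextFree(block *net.IPNet, allocated map[string]bool) net.IP {
	for cur := block.IP.Mask(block.Mask); block.Contains(cur); cur = inc(cur) {
		if !allocated[cur.String()] {
			return cur
		}
	}
	return nil
}

// inc returns ip+1 without mutating its argument.
func inc(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.82.192/26")
	// .192 through .197 were already handed out earlier in this log.
	allocated := map[string]bool{}
	for i := 192; i <= 197; i++ {
		allocated[fmt.Sprintf("192.168.82.%d", i)] = true
	}
	fmt.Println("next assignment:", nextFree(block, allocated)) // 192.168.82.198
}
```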
Jan 15 12:51:31.313495 containerd[1702]: 2025-01-15 12:51:31.281 [INFO][5354] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.198/26] IPv6=[] ContainerID="35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" HandleID="k8s-pod-network.35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:31.315027 containerd[1702]: 2025-01-15 12:51:31.283 [INFO][5322] cni-plugin/k8s.go 386: Populated endpoint ContainerID="35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nh4f9" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f965f94f-e6f0-4cbe-9d52-87cecc0ddc61", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"", Pod:"coredns-7db6d8ff4d-nh4f9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74af2bca0a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:31.315027 containerd[1702]: 2025-01-15 12:51:31.284 [INFO][5322] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.82.198/32] ContainerID="35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nh4f9" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:31.315027 containerd[1702]: 2025-01-15 12:51:31.284 [INFO][5322] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74af2bca0a6 ContainerID="35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nh4f9" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:31.315027 containerd[1702]: 2025-01-15 12:51:31.288 [INFO][5322] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nh4f9" 
WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:31.315027 containerd[1702]: 2025-01-15 12:51:31.290 [INFO][5322] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nh4f9" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f965f94f-e6f0-4cbe-9d52-87cecc0ddc61", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad", Pod:"coredns-7db6d8ff4d-nh4f9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74af2bca0a6", MAC:"0e:21:61:1d:bc:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:31.315027 containerd[1702]: 2025-01-15 12:51:31.309 [INFO][5322] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nh4f9" WorkloadEndpoint="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:31.358810 containerd[1702]: time="2025-01-15T12:51:31.358393980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:31.358969 containerd[1702]: time="2025-01-15T12:51:31.358773180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:31.358969 containerd[1702]: time="2025-01-15T12:51:31.358820380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:31.358969 containerd[1702]: time="2025-01-15T12:51:31.358917580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:31.382335 systemd[1]: Started cri-containerd-35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad.scope - libcontainer container 35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad. Jan 15 12:51:31.432939 containerd[1702]: time="2025-01-15T12:51:31.432705109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nh4f9,Uid:f965f94f-e6f0-4cbe-9d52-87cecc0ddc61,Namespace:kube-system,Attempt:1,} returns sandbox id \"35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad\"" Jan 15 12:51:31.438590 containerd[1702]: time="2025-01-15T12:51:31.438546716Z" level=info msg="CreateContainer within sandbox \"35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 12:51:31.452920 systemd-networkd[1331]: cali389aa636f46: Gained IPv6LL Jan 15 12:51:31.472048 containerd[1702]: time="2025-01-15T12:51:31.471997476Z" level=info msg="CreateContainer within sandbox \"35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20cd538cd8a397b8ec7a40636ec0a3f1be2d6c2058e65daf0640fbab51c0e5a5\"" Jan 15 12:51:31.474087 containerd[1702]: time="2025-01-15T12:51:31.473235038Z" level=info msg="StartContainer for \"20cd538cd8a397b8ec7a40636ec0a3f1be2d6c2058e65daf0640fbab51c0e5a5\"" Jan 15 12:51:31.506013 systemd[1]: Started cri-containerd-20cd538cd8a397b8ec7a40636ec0a3f1be2d6c2058e65daf0640fbab51c0e5a5.scope - libcontainer container 20cd538cd8a397b8ec7a40636ec0a3f1be2d6c2058e65daf0640fbab51c0e5a5. Jan 15 12:51:31.544481 containerd[1702]: time="2025-01-15T12:51:31.544430843Z" level=info msg="StartContainer for \"20cd538cd8a397b8ec7a40636ec0a3f1be2d6c2058e65daf0640fbab51c0e5a5\" returns successfully" Jan 15 12:51:31.963948 systemd-networkd[1331]: calica23d9d760f: Gained IPv6LL Jan 15 12:51:32.217117 kubelet[3253]: I0115 12:51:32.216459 3253 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:51:32.259686 kubelet[3253]: I0115 12:51:32.259630 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nh4f9" podStartSLOduration=38.259515501 podStartE2EDuration="38.259515501s" podCreationTimestamp="2025-01-15 12:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:51:32.238515916 +0000 UTC m=+52.403999266" watchObservedRunningTime="2025-01-15 12:51:32.259515501 +0000 UTC m=+52.424998851" Jan 15 12:51:32.348209 systemd-networkd[1331]: cali74af2bca0a6: Gained IPv6LL Jan 15 12:51:33.059626 containerd[1702]: time="2025-01-15T12:51:33.059550131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:33.062318 containerd[1702]: time="2025-01-15T12:51:33.062261655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 15 12:51:33.067912 containerd[1702]: time="2025-01-15T12:51:33.067869381Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:33.072521 containerd[1702]: time="2025-01-15T12:51:33.072477907Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:33.075093 containerd[1702]: time="2025-01-15T12:51:33.075040030Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.028856545s" Jan 15 12:51:33.075093 containerd[1702]: time="2025-01-15T12:51:33.075088230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 15 12:51:33.078083 containerd[1702]: time="2025-01-15T12:51:33.077877353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 12:51:33.079894 containerd[1702]: time="2025-01-15T12:51:33.079861715Z" level=info msg="CreateContainer within sandbox \"1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 15 12:51:33.116001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106358392.mount: Deactivated successfully. Jan 15 12:51:33.127578 containerd[1702]: time="2025-01-15T12:51:33.127521892Z" level=info msg="CreateContainer within sandbox \"1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b90fd47b0987d574082e3965aecf7e6faa77304e4e1771d26cb6bf5948bd670a\"" Jan 15 12:51:33.128447 containerd[1702]: time="2025-01-15T12:51:33.128397853Z" level=info msg="StartContainer for \"b90fd47b0987d574082e3965aecf7e6faa77304e4e1771d26cb6bf5948bd670a\"" Jan 15 12:51:33.167105 systemd[1]: Started cri-containerd-b90fd47b0987d574082e3965aecf7e6faa77304e4e1771d26cb6bf5948bd670a.scope - libcontainer container b90fd47b0987d574082e3965aecf7e6faa77304e4e1771d26cb6bf5948bd670a. 
Jan 15 12:51:33.199866 containerd[1702]: time="2025-01-15T12:51:33.199822457Z" level=info msg="StartContainer for \"b90fd47b0987d574082e3965aecf7e6faa77304e4e1771d26cb6bf5948bd670a\" returns successfully" Jan 15 12:51:33.407737 containerd[1702]: time="2025-01-15T12:51:33.407664183Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:33.410541 containerd[1702]: time="2025-01-15T12:51:33.410509386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 15 12:51:33.412448 containerd[1702]: time="2025-01-15T12:51:33.412358348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 334.444195ms" Jan 15 12:51:33.412448 containerd[1702]: time="2025-01-15T12:51:33.412393468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 15 12:51:33.415283 containerd[1702]: time="2025-01-15T12:51:33.415029911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 15 12:51:33.416096 containerd[1702]: time="2025-01-15T12:51:33.416020233Z" level=info msg="CreateContainer within sandbox \"49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 12:51:33.468231 containerd[1702]: time="2025-01-15T12:51:33.468182294Z" level=info msg="CreateContainer within sandbox \"49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f187cad5d6392789cc739429d198fd1d529511cfee2847b58cc9fbabf5201c26\"" Jan 15 12:51:33.468851 containerd[1702]: time="2025-01-15T12:51:33.468800735Z" level=info msg="StartContainer for \"f187cad5d6392789cc739429d198fd1d529511cfee2847b58cc9fbabf5201c26\"" Jan 15 12:51:33.490892 systemd[1]: Started cri-containerd-f187cad5d6392789cc739429d198fd1d529511cfee2847b58cc9fbabf5201c26.scope - libcontainer container f187cad5d6392789cc739429d198fd1d529511cfee2847b58cc9fbabf5201c26. Jan 15 12:51:33.523931 containerd[1702]: time="2025-01-15T12:51:33.523435279Z" level=info msg="StartContainer for \"f187cad5d6392789cc739429d198fd1d529511cfee2847b58cc9fbabf5201c26\" returns successfully" Jan 15 12:51:34.107107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2485119032.mount: Deactivated successfully. 
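Note the second apiserver pull above completed in 334.444195ms with only 77 bytes read: the layers were already on disk from the first pull, so only the manifest was re-fetched. The containerd records throughout this log are logfmt-style (time="..." level=... msg="..."); a simplified extractor for those three fields, using a regular expression that tolerates the escaped quotes inside msg but is not a full logfmt parser:

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches time="...", level=..., msg="..." with backslash-escaped quotes
// allowed inside msg. Simplified relative to a complete logfmt grammar.
var re = regexp.MustCompile(`time="([^"]+)"\s+level=(\w+)\s+msg="((?:[^"\\]|\\.)*)"`)

func main() {
	line := `time="2025-01-15T12:51:33.412358348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" in 334.444195ms"`
	m := re.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("time: ", m[1])
	fmt.Println("level:", m[2])
	fmt.Println("msg:  ", m[3])
}
```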
Jan 15 12:51:35.225093 kubelet[3253]: I0115 12:51:35.224505 3253 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:51:36.482034 containerd[1702]: time="2025-01-15T12:51:36.481972455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:36.484191 containerd[1702]: time="2025-01-15T12:51:36.484037218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 15 12:51:36.488170 containerd[1702]: time="2025-01-15T12:51:36.487088101Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:36.491527 containerd[1702]: time="2025-01-15T12:51:36.491474547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:36.492353 containerd[1702]: time="2025-01-15T12:51:36.492273387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.077207556s" Jan 15 12:51:36.492457 containerd[1702]: time="2025-01-15T12:51:36.492440028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 15 12:51:36.493927 containerd[1702]: time="2025-01-15T12:51:36.493902949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 15 12:51:36.509859 containerd[1702]: time="2025-01-15T12:51:36.508923607Z" level=info msg="CreateContainer within sandbox \"4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 15 12:51:36.544260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1360230500.mount: Deactivated successfully. Jan 15 12:51:36.557749 containerd[1702]: time="2025-01-15T12:51:36.555949583Z" level=info msg="CreateContainer within sandbox \"4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fd0b0c3c1c551a88c16f768e7ca1126ff9825677242f90ded778caed28d4c846\"" Jan 15 12:51:36.558270 containerd[1702]: time="2025-01-15T12:51:36.558217665Z" level=info msg="StartContainer for \"fd0b0c3c1c551a88c16f768e7ca1126ff9825677242f90ded778caed28d4c846\"" Jan 15 12:51:36.592906 systemd[1]: Started cri-containerd-fd0b0c3c1c551a88c16f768e7ca1126ff9825677242f90ded778caed28d4c846.scope - libcontainer container fd0b0c3c1c551a88c16f768e7ca1126ff9825677242f90ded778caed28d4c846. 
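The kubelet "Observed pod startup duration" records in this log follow a simple identity: podStartSLOduration is the end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Checking that against the earlier calico-apiserver-599d8cb64-5f2nt record with the standard library reproduces the logged value exactly:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the earlier pod_startup_latency_tracker record
	// for calico-apiserver-599d8cb64-5f2nt.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	firstPull := parse("2025-01-15 12:51:27.338088715 +0000 UTC")
	lastPull := parse("2025-01-15 12:51:31.045170964 +0000 UTC")
	e2e := 31227247142 * time.Nanosecond // podStartE2EDuration=31.227247142s

	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo) // 27.520164893s, matching podStartSLOduration
}
```

The same identity holds for the other records above (e.g. 37.246897159s minus the 12:51:30.677 to 12:51:33.413 pull window yields 34.511190173s).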
Jan 15 12:51:36.639755 containerd[1702]: time="2025-01-15T12:51:36.639670042Z" level=info msg="StartContainer for \"fd0b0c3c1c551a88c16f768e7ca1126ff9825677242f90ded778caed28d4c846\" returns successfully" Jan 15 12:51:37.247163 kubelet[3253]: I0115 12:51:37.246915 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-599d8cb64-xk5xp" podStartSLOduration=34.511190173 podStartE2EDuration="37.246897159s" podCreationTimestamp="2025-01-15 12:51:00 +0000 UTC" firstStartedPulling="2025-01-15 12:51:30.677650883 +0000 UTC m=+50.843134233" lastFinishedPulling="2025-01-15 12:51:33.413357869 +0000 UTC m=+53.578841219" observedRunningTime="2025-01-15 12:51:34.245944293 +0000 UTC m=+54.411427603" watchObservedRunningTime="2025-01-15 12:51:37.246897159 +0000 UTC m=+57.412380509" Jan 15 12:51:37.247610 kubelet[3253]: I0115 12:51:37.247330 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67c6b9d78c-8b976" podStartSLOduration=29.519023999 podStartE2EDuration="35.24731944s" podCreationTimestamp="2025-01-15 12:51:02 +0000 UTC" firstStartedPulling="2025-01-15 12:51:30.764967188 +0000 UTC m=+50.930450538" lastFinishedPulling="2025-01-15 12:51:36.493262629 +0000 UTC m=+56.658745979" observedRunningTime="2025-01-15 12:51:37.246342918 +0000 UTC m=+57.411826268" watchObservedRunningTime="2025-01-15 12:51:37.24731944 +0000 UTC m=+57.412802790" Jan 15 12:51:37.861084 containerd[1702]: time="2025-01-15T12:51:37.861024405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:37.864126 containerd[1702]: time="2025-01-15T12:51:37.864076328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 15 12:51:37.868976 containerd[1702]: time="2025-01-15T12:51:37.868305613Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:37.873878 containerd[1702]: time="2025-01-15T12:51:37.873824940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:37.874564 containerd[1702]: time="2025-01-15T12:51:37.874521381Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.380489911s" Jan 15 12:51:37.874564 containerd[1702]: time="2025-01-15T12:51:37.874560261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 15 12:51:37.877005 containerd[1702]: time="2025-01-15T12:51:37.876853623Z" level=info msg="CreateContainer within sandbox \"1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 15 12:51:37.920947 containerd[1702]: 
time="2025-01-15T12:51:37.920850755Z" level=info msg="CreateContainer within sandbox \"1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0e3d506ffa98e904927844097e7437b7f8f15250408ae018f19e29ed5f38c1f8\"" Jan 15 12:51:37.920947 containerd[1702]: time="2025-01-15T12:51:37.921607596Z" level=info msg="StartContainer for \"0e3d506ffa98e904927844097e7437b7f8f15250408ae018f19e29ed5f38c1f8\"" Jan 15 12:51:37.967882 systemd[1]: Started cri-containerd-0e3d506ffa98e904927844097e7437b7f8f15250408ae018f19e29ed5f38c1f8.scope - libcontainer container 0e3d506ffa98e904927844097e7437b7f8f15250408ae018f19e29ed5f38c1f8. Jan 15 12:51:38.002073 containerd[1702]: time="2025-01-15T12:51:38.000651090Z" level=info msg="StartContainer for \"0e3d506ffa98e904927844097e7437b7f8f15250408ae018f19e29ed5f38c1f8\" returns successfully" Jan 15 12:51:38.091815 kubelet[3253]: I0115 12:51:38.091773 3253 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 15 12:51:38.092909 kubelet[3253]: I0115 12:51:38.092889 3253 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 15 12:51:38.250453 kubelet[3253]: I0115 12:51:38.250387 3253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kw7xv" podStartSLOduration=26.730743139 podStartE2EDuration="36.250369505s" podCreationTimestamp="2025-01-15 12:51:02 +0000 UTC" firstStartedPulling="2025-01-15 12:51:28.355632536 +0000 UTC m=+48.521115886" lastFinishedPulling="2025-01-15 12:51:37.875258902 +0000 UTC m=+58.040742252" observedRunningTime="2025-01-15 12:51:38.250293985 +0000 UTC m=+58.415777335" watchObservedRunningTime="2025-01-15 12:51:38.250369505 +0000 UTC m=+58.415852815" Jan 15 12:51:39.963466 containerd[1702]: time="2025-01-15T12:51:39.963140609Z" level=info msg="StopPodSandbox for \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\"" Jan 15 12:51:40.047006 containerd[1702]: 2025-01-15 12:51:40.007 [WARNING][5672] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b15694bf-ebfd-4bd2-8276-f46e85d79323", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f", Pod:"csi-node-driver-kw7xv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid6404326b95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:40.047006 containerd[1702]: 2025-01-15 12:51:40.007 [INFO][5672] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:40.047006 containerd[1702]: 2025-01-15 12:51:40.007 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" iface="eth0" netns="" Jan 15 12:51:40.047006 containerd[1702]: 2025-01-15 12:51:40.007 [INFO][5672] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:40.047006 containerd[1702]: 2025-01-15 12:51:40.007 [INFO][5672] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:40.047006 containerd[1702]: 2025-01-15 12:51:40.027 [INFO][5678] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" HandleID="k8s-pod-network.6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Workload="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:40.047006 containerd[1702]: 2025-01-15 12:51:40.027 [INFO][5678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:40.047006 containerd[1702]: 2025-01-15 12:51:40.027 [INFO][5678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:40.047006 containerd[1702]: 2025-01-15 12:51:40.042 [WARNING][5678] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" HandleID="k8s-pod-network.6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Workload="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:40.047006 containerd[1702]: 2025-01-15 12:51:40.042 [INFO][5678] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" HandleID="k8s-pod-network.6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Workload="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:40.047006 containerd[1702]: 2025-01-15 12:51:40.043 [INFO][5678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:40.047006 containerd[1702]: 2025-01-15 12:51:40.045 [INFO][5672] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:40.047664 containerd[1702]: time="2025-01-15T12:51:40.047516068Z" level=info msg="TearDown network for sandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\" successfully" Jan 15 12:51:40.047664 containerd[1702]: time="2025-01-15T12:51:40.047546068Z" level=info msg="StopPodSandbox for \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\" returns successfully" Jan 15 12:51:40.048385 containerd[1702]: time="2025-01-15T12:51:40.048355989Z" level=info msg="RemovePodSandbox for \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\"" Jan 15 12:51:40.048385 containerd[1702]: time="2025-01-15T12:51:40.048416429Z" level=info msg="Forcibly stopping sandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\"" Jan 15 12:51:40.133817 containerd[1702]: 2025-01-15 12:51:40.094 [WARNING][5698] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b15694bf-ebfd-4bd2-8276-f46e85d79323", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"1d2a3927f211622560ceecba363099e6fe21bcd01945fc721182f1c47da5de5f", Pod:"csi-node-driver-kw7xv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid6404326b95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:40.133817 containerd[1702]: 2025-01-15 12:51:40.095 [INFO][5698] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:40.133817 containerd[1702]: 2025-01-15 12:51:40.095 [INFO][5698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" iface="eth0" netns="" Jan 15 12:51:40.133817 containerd[1702]: 2025-01-15 12:51:40.095 [INFO][5698] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:40.133817 containerd[1702]: 2025-01-15 12:51:40.095 [INFO][5698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:40.133817 containerd[1702]: 2025-01-15 12:51:40.119 [INFO][5704] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" HandleID="k8s-pod-network.6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Workload="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:40.133817 containerd[1702]: 2025-01-15 12:51:40.119 [INFO][5704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:40.133817 containerd[1702]: 2025-01-15 12:51:40.119 [INFO][5704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:40.133817 containerd[1702]: 2025-01-15 12:51:40.129 [WARNING][5704] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" HandleID="k8s-pod-network.6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Workload="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:40.133817 containerd[1702]: 2025-01-15 12:51:40.129 [INFO][5704] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" HandleID="k8s-pod-network.6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Workload="ci--4081.3.0--a--f89ceb891c-k8s-csi--node--driver--kw7xv-eth0" Jan 15 12:51:40.133817 containerd[1702]: 2025-01-15 12:51:40.130 [INFO][5704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:40.133817 containerd[1702]: 2025-01-15 12:51:40.131 [INFO][5698] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078" Jan 15 12:51:40.133817 containerd[1702]: time="2025-01-15T12:51:40.133748810Z" level=info msg="TearDown network for sandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\" successfully" Jan 15 12:51:40.146917 containerd[1702]: time="2025-01-15T12:51:40.146670905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:51:40.146917 containerd[1702]: time="2025-01-15T12:51:40.146764346Z" level=info msg="RemovePodSandbox \"6be4e1421b617020ef903dec49996455564dca3fcaab0f87459124f5934c4078\" returns successfully" Jan 15 12:51:40.147787 containerd[1702]: time="2025-01-15T12:51:40.147472546Z" level=info msg="StopPodSandbox for \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\"" Jan 15 12:51:40.228936 containerd[1702]: 2025-01-15 12:51:40.196 [WARNING][5722] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0", GenerateName:"calico-apiserver-599d8cb64-", Namespace:"calico-apiserver", SelfLink:"", UID:"14500a27-f3f6-499b-94fc-b167797dfc9f", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599d8cb64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8", Pod:"calico-apiserver-599d8cb64-xk5xp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali389aa636f46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:40.228936 containerd[1702]: 2025-01-15 12:51:40.196 [INFO][5722] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:40.228936 containerd[1702]: 2025-01-15 12:51:40.196 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" iface="eth0" netns="" Jan 15 12:51:40.228936 containerd[1702]: 2025-01-15 12:51:40.196 [INFO][5722] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:40.228936 containerd[1702]: 2025-01-15 12:51:40.196 [INFO][5722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:40.228936 containerd[1702]: 2025-01-15 12:51:40.216 [INFO][5731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" HandleID="k8s-pod-network.f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:40.228936 containerd[1702]: 2025-01-15 12:51:40.216 [INFO][5731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:40.228936 containerd[1702]: 2025-01-15 12:51:40.216 [INFO][5731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:40.228936 containerd[1702]: 2025-01-15 12:51:40.224 [WARNING][5731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" HandleID="k8s-pod-network.f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:40.228936 containerd[1702]: 2025-01-15 12:51:40.224 [INFO][5731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" HandleID="k8s-pod-network.f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:40.228936 containerd[1702]: 2025-01-15 12:51:40.225 [INFO][5731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:40.228936 containerd[1702]: 2025-01-15 12:51:40.227 [INFO][5722] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:40.230651 containerd[1702]: time="2025-01-15T12:51:40.230385284Z" level=info msg="TearDown network for sandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\" successfully" Jan 15 12:51:40.230651 containerd[1702]: time="2025-01-15T12:51:40.230447804Z" level=info msg="StopPodSandbox for \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\" returns successfully" Jan 15 12:51:40.231528 containerd[1702]: time="2025-01-15T12:51:40.231377406Z" level=info msg="RemovePodSandbox for \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\"" Jan 15 12:51:40.231528 containerd[1702]: time="2025-01-15T12:51:40.231410046Z" level=info msg="Forcibly stopping sandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\"" Jan 15 12:51:40.308468 containerd[1702]: 2025-01-15 12:51:40.270 [WARNING][5749] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0", GenerateName:"calico-apiserver-599d8cb64-", Namespace:"calico-apiserver", SelfLink:"", UID:"14500a27-f3f6-499b-94fc-b167797dfc9f", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599d8cb64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"49ecfc1f8da006226333ab5e9f004f20067d33e5962fa93abdf70a35a56fb8b8", Pod:"calico-apiserver-599d8cb64-xk5xp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali389aa636f46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:40.308468 containerd[1702]: 2025-01-15 12:51:40.270 [INFO][5749] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:40.308468 containerd[1702]: 2025-01-15 12:51:40.270 [INFO][5749] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" iface="eth0" netns="" Jan 15 12:51:40.308468 containerd[1702]: 2025-01-15 12:51:40.270 [INFO][5749] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:40.308468 containerd[1702]: 2025-01-15 12:51:40.270 [INFO][5749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:40.308468 containerd[1702]: 2025-01-15 12:51:40.293 [INFO][5755] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" HandleID="k8s-pod-network.f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:40.308468 containerd[1702]: 2025-01-15 12:51:40.294 [INFO][5755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:40.308468 containerd[1702]: 2025-01-15 12:51:40.294 [INFO][5755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:40.308468 containerd[1702]: 2025-01-15 12:51:40.303 [WARNING][5755] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" HandleID="k8s-pod-network.f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:40.308468 containerd[1702]: 2025-01-15 12:51:40.303 [INFO][5755] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" HandleID="k8s-pod-network.f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--xk5xp-eth0" Jan 15 12:51:40.308468 containerd[1702]: 2025-01-15 12:51:40.305 [INFO][5755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:40.308468 containerd[1702]: 2025-01-15 12:51:40.306 [INFO][5749] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82" Jan 15 12:51:40.309918 containerd[1702]: time="2025-01-15T12:51:40.308710257Z" level=info msg="TearDown network for sandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\" successfully" Jan 15 12:51:40.318619 containerd[1702]: time="2025-01-15T12:51:40.318579869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:51:40.318844 containerd[1702]: time="2025-01-15T12:51:40.318823309Z" level=info msg="RemovePodSandbox \"f4518d5f2aae84e9c97c863e98e9ea2d758f39a559efe422ea97931ffc27ca82\" returns successfully" Jan 15 12:51:40.319442 containerd[1702]: time="2025-01-15T12:51:40.319419550Z" level=info msg="StopPodSandbox for \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\"" Jan 15 12:51:40.392187 containerd[1702]: 2025-01-15 12:51:40.355 [WARNING][5774] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f965f94f-e6f0-4cbe-9d52-87cecc0ddc61", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad", Pod:"coredns-7db6d8ff4d-nh4f9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74af2bca0a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:40.392187 containerd[1702]: 2025-01-15 12:51:40.355 [INFO][5774] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:40.392187 containerd[1702]: 2025-01-15 12:51:40.355 [INFO][5774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" iface="eth0" netns="" Jan 15 12:51:40.392187 containerd[1702]: 2025-01-15 12:51:40.355 [INFO][5774] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:40.392187 containerd[1702]: 2025-01-15 12:51:40.355 [INFO][5774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:40.392187 containerd[1702]: 2025-01-15 12:51:40.376 [INFO][5781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" HandleID="k8s-pod-network.d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:40.392187 containerd[1702]: 2025-01-15 12:51:40.376 [INFO][5781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:40.392187 containerd[1702]: 2025-01-15 12:51:40.376 [INFO][5781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:40.392187 containerd[1702]: 2025-01-15 12:51:40.387 [WARNING][5781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" HandleID="k8s-pod-network.d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:40.392187 containerd[1702]: 2025-01-15 12:51:40.387 [INFO][5781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" HandleID="k8s-pod-network.d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:40.392187 containerd[1702]: 2025-01-15 12:51:40.388 [INFO][5781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:40.392187 containerd[1702]: 2025-01-15 12:51:40.390 [INFO][5774] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:40.392893 containerd[1702]: time="2025-01-15T12:51:40.392671476Z" level=info msg="TearDown network for sandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\" successfully" Jan 15 12:51:40.392893 containerd[1702]: time="2025-01-15T12:51:40.392757556Z" level=info msg="StopPodSandbox for \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\" returns successfully" Jan 15 12:51:40.393456 containerd[1702]: time="2025-01-15T12:51:40.393382917Z" level=info msg="RemovePodSandbox for \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\"" Jan 15 12:51:40.393456 containerd[1702]: time="2025-01-15T12:51:40.393418837Z" level=info msg="Forcibly stopping sandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\"" Jan 15 12:51:40.462070 containerd[1702]: 2025-01-15 12:51:40.428 [WARNING][5799] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f965f94f-e6f0-4cbe-9d52-87cecc0ddc61", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"35b45bf3fbcb4c92225eedfe001081672b6e31cf62e8099be4ecda28400bf7ad", Pod:"coredns-7db6d8ff4d-nh4f9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74af2bca0a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:40.462070 containerd[1702]: 2025-01-15 12:51:40.428 [INFO][5799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:40.462070 containerd[1702]: 2025-01-15 12:51:40.428 [INFO][5799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" iface="eth0" netns="" Jan 15 12:51:40.462070 containerd[1702]: 2025-01-15 12:51:40.428 [INFO][5799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:40.462070 containerd[1702]: 2025-01-15 12:51:40.428 [INFO][5799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:40.462070 containerd[1702]: 2025-01-15 12:51:40.449 [INFO][5805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" HandleID="k8s-pod-network.d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:40.462070 containerd[1702]: 2025-01-15 12:51:40.449 [INFO][5805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:40.462070 containerd[1702]: 2025-01-15 12:51:40.449 [INFO][5805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:40.462070 containerd[1702]: 2025-01-15 12:51:40.457 [WARNING][5805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" HandleID="k8s-pod-network.d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:40.462070 containerd[1702]: 2025-01-15 12:51:40.457 [INFO][5805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" HandleID="k8s-pod-network.d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--nh4f9-eth0" Jan 15 12:51:40.462070 containerd[1702]: 2025-01-15 12:51:40.459 [INFO][5805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:40.462070 containerd[1702]: 2025-01-15 12:51:40.460 [INFO][5799] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1" Jan 15 12:51:40.463034 containerd[1702]: time="2025-01-15T12:51:40.462527959Z" level=info msg="TearDown network for sandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\" successfully" Jan 15 12:51:40.470191 containerd[1702]: time="2025-01-15T12:51:40.470022168Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:51:40.470191 containerd[1702]: time="2025-01-15T12:51:40.470094888Z" level=info msg="RemovePodSandbox \"d88a81ef455597e683b8603070fc44610bf7319c41d6803d8c30c260fbfab6e1\" returns successfully" Jan 15 12:51:40.471002 containerd[1702]: time="2025-01-15T12:51:40.470749888Z" level=info msg="StopPodSandbox for \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\"" Jan 15 12:51:40.542386 containerd[1702]: 2025-01-15 12:51:40.508 [WARNING][5823] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0", GenerateName:"calico-kube-controllers-67c6b9d78c-", Namespace:"calico-system", SelfLink:"", UID:"39441957-8677-4d6c-967c-9eea47f5a2b2", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c6b9d78c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446", Pod:"calico-kube-controllers-67c6b9d78c-8b976", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calica23d9d760f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:40.542386 containerd[1702]: 2025-01-15 12:51:40.508 [INFO][5823] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:40.542386 containerd[1702]: 2025-01-15 12:51:40.508 [INFO][5823] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" iface="eth0" netns="" Jan 15 12:51:40.542386 containerd[1702]: 2025-01-15 12:51:40.508 [INFO][5823] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:40.542386 containerd[1702]: 2025-01-15 12:51:40.508 [INFO][5823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:40.542386 containerd[1702]: 2025-01-15 12:51:40.529 [INFO][5829] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" HandleID="k8s-pod-network.1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:40.542386 containerd[1702]: 2025-01-15 12:51:40.529 [INFO][5829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:40.542386 containerd[1702]: 2025-01-15 12:51:40.530 [INFO][5829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:40.542386 containerd[1702]: 2025-01-15 12:51:40.538 [WARNING][5829] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" HandleID="k8s-pod-network.1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:40.542386 containerd[1702]: 2025-01-15 12:51:40.538 [INFO][5829] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" HandleID="k8s-pod-network.1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:40.542386 containerd[1702]: 2025-01-15 12:51:40.539 [INFO][5829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:40.542386 containerd[1702]: 2025-01-15 12:51:40.540 [INFO][5823] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:40.545535 containerd[1702]: time="2025-01-15T12:51:40.542861573Z" level=info msg="TearDown network for sandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\" successfully" Jan 15 12:51:40.545535 containerd[1702]: time="2025-01-15T12:51:40.542910613Z" level=info msg="StopPodSandbox for \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\" returns successfully" Jan 15 12:51:40.545535 containerd[1702]: time="2025-01-15T12:51:40.544434135Z" level=info msg="RemovePodSandbox for \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\"" Jan 15 12:51:40.545535 containerd[1702]: time="2025-01-15T12:51:40.544465295Z" level=info msg="Forcibly stopping sandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\"" Jan 15 12:51:40.618738 containerd[1702]: 2025-01-15 12:51:40.581 [WARNING][5847] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0", GenerateName:"calico-kube-controllers-67c6b9d78c-", Namespace:"calico-system", SelfLink:"", UID:"39441957-8677-4d6c-967c-9eea47f5a2b2", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c6b9d78c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"4b3466bf1746b8fe795e561205a61142a65b6f83267c66502b7382d3cb8e7446", Pod:"calico-kube-controllers-67c6b9d78c-8b976", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calica23d9d760f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:40.618738 containerd[1702]: 2025-01-15 12:51:40.581 [INFO][5847] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:40.618738 containerd[1702]: 2025-01-15 12:51:40.581 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" iface="eth0" netns="" Jan 15 12:51:40.618738 containerd[1702]: 2025-01-15 12:51:40.581 [INFO][5847] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:40.618738 containerd[1702]: 2025-01-15 12:51:40.581 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:40.618738 containerd[1702]: 2025-01-15 12:51:40.604 [INFO][5853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" HandleID="k8s-pod-network.1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:40.618738 containerd[1702]: 2025-01-15 12:51:40.604 [INFO][5853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:40.618738 containerd[1702]: 2025-01-15 12:51:40.604 [INFO][5853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:40.618738 containerd[1702]: 2025-01-15 12:51:40.613 [WARNING][5853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" HandleID="k8s-pod-network.1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:40.618738 containerd[1702]: 2025-01-15 12:51:40.614 [INFO][5853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" HandleID="k8s-pod-network.1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--kube--controllers--67c6b9d78c--8b976-eth0" Jan 15 12:51:40.618738 containerd[1702]: 2025-01-15 12:51:40.615 [INFO][5853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:40.618738 containerd[1702]: 2025-01-15 12:51:40.617 [INFO][5847] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33" Jan 15 12:51:40.618738 containerd[1702]: time="2025-01-15T12:51:40.618590699Z" level=info msg="TearDown network for sandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\" successfully" Jan 15 12:51:40.627791 containerd[1702]: time="2025-01-15T12:51:40.627514710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:51:40.627791 containerd[1702]: time="2025-01-15T12:51:40.627635230Z" level=info msg="RemovePodSandbox \"1039d43aabe7cab8905fea9d2d34a40a8318493ec4e1d9902e27170ad6a10a33\" returns successfully" Jan 15 12:51:40.628607 containerd[1702]: time="2025-01-15T12:51:40.628085070Z" level=info msg="StopPodSandbox for \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\"" Jan 15 12:51:40.701147 containerd[1702]: 2025-01-15 12:51:40.666 [WARNING][5871] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0", GenerateName:"calico-apiserver-599d8cb64-", Namespace:"calico-apiserver", SelfLink:"", UID:"25df612d-7c87-41cf-a49d-92de2f6930d0", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599d8cb64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e", Pod:"calico-apiserver-599d8cb64-5f2nt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6eb8f35cf84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:40.701147 containerd[1702]: 2025-01-15 12:51:40.666 [INFO][5871] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:40.701147 containerd[1702]: 2025-01-15 12:51:40.666 [INFO][5871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" iface="eth0" netns="" Jan 15 12:51:40.701147 containerd[1702]: 2025-01-15 12:51:40.666 [INFO][5871] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:40.701147 containerd[1702]: 2025-01-15 12:51:40.666 [INFO][5871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:40.701147 containerd[1702]: 2025-01-15 12:51:40.686 [INFO][5877] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" HandleID="k8s-pod-network.940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:40.701147 containerd[1702]: 2025-01-15 12:51:40.686 [INFO][5877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:40.701147 containerd[1702]: 2025-01-15 12:51:40.686 [INFO][5877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:40.701147 containerd[1702]: 2025-01-15 12:51:40.695 [WARNING][5877] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" HandleID="k8s-pod-network.940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:40.701147 containerd[1702]: 2025-01-15 12:51:40.695 [INFO][5877] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" HandleID="k8s-pod-network.940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:40.701147 containerd[1702]: 2025-01-15 12:51:40.697 [INFO][5877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:40.701147 containerd[1702]: 2025-01-15 12:51:40.698 [INFO][5871] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:40.701147 containerd[1702]: time="2025-01-15T12:51:40.701024073Z" level=info msg="TearDown network for sandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\" successfully" Jan 15 12:51:40.701147 containerd[1702]: time="2025-01-15T12:51:40.701048633Z" level=info msg="StopPodSandbox for \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\" returns successfully" Jan 15 12:51:40.702244 containerd[1702]: time="2025-01-15T12:51:40.701921314Z" level=info msg="RemovePodSandbox for \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\"" Jan 15 12:51:40.702244 containerd[1702]: time="2025-01-15T12:51:40.701956234Z" level=info msg="Forcibly stopping sandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\"" Jan 15 12:51:40.772902 containerd[1702]: 2025-01-15 12:51:40.738 [WARNING][5895] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0", GenerateName:"calico-apiserver-599d8cb64-", Namespace:"calico-apiserver", SelfLink:"", UID:"25df612d-7c87-41cf-a49d-92de2f6930d0", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599d8cb64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"d658fb73ee87508e57343d595c0f9994fb0670f614d7829b455d5d1ff68d019e", Pod:"calico-apiserver-599d8cb64-5f2nt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6eb8f35cf84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:40.772902 containerd[1702]: 2025-01-15 12:51:40.739 [INFO][5895] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:40.772902 containerd[1702]: 2025-01-15 12:51:40.739 [INFO][5895] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" iface="eth0" netns="" Jan 15 12:51:40.772902 containerd[1702]: 2025-01-15 12:51:40.739 [INFO][5895] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:40.772902 containerd[1702]: 2025-01-15 12:51:40.739 [INFO][5895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:40.772902 containerd[1702]: 2025-01-15 12:51:40.758 [INFO][5902] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" HandleID="k8s-pod-network.940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:40.772902 containerd[1702]: 2025-01-15 12:51:40.758 [INFO][5902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:40.772902 containerd[1702]: 2025-01-15 12:51:40.758 [INFO][5902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:40.772902 containerd[1702]: 2025-01-15 12:51:40.767 [WARNING][5902] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" HandleID="k8s-pod-network.940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:40.772902 containerd[1702]: 2025-01-15 12:51:40.767 [INFO][5902] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" HandleID="k8s-pod-network.940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Workload="ci--4081.3.0--a--f89ceb891c-k8s-calico--apiserver--599d8cb64--5f2nt-eth0" Jan 15 12:51:40.772902 containerd[1702]: 2025-01-15 12:51:40.768 [INFO][5902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:40.772902 containerd[1702]: 2025-01-15 12:51:40.770 [INFO][5895] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe" Jan 15 12:51:40.772902 containerd[1702]: time="2025-01-15T12:51:40.771765274Z" level=info msg="TearDown network for sandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\" successfully" Jan 15 12:51:41.137879 containerd[1702]: time="2025-01-15T12:51:41.137817931Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:51:41.138711 containerd[1702]: time="2025-01-15T12:51:41.137920451Z" level=info msg="RemovePodSandbox \"940d8040ce125679d3fb8301c1552bf489fb3762eb0e6bbb5ec47463660d0bfe\" returns successfully" Jan 15 12:51:41.138711 containerd[1702]: time="2025-01-15T12:51:41.138429612Z" level=info msg="StopPodSandbox for \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\"" Jan 15 12:51:41.210390 containerd[1702]: 2025-01-15 12:51:41.176 [WARNING][5920] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c4e3b7d0-351b-4511-9466-e0a6b59b0959", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719", Pod:"coredns-7db6d8ff4d-9b4kr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b778bc43b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:41.210390 containerd[1702]: 2025-01-15 12:51:41.176 [INFO][5920] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:41.210390 containerd[1702]: 2025-01-15 12:51:41.176 [INFO][5920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" iface="eth0" netns="" Jan 15 12:51:41.210390 containerd[1702]: 2025-01-15 12:51:41.176 [INFO][5920] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:41.210390 containerd[1702]: 2025-01-15 12:51:41.176 [INFO][5920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:41.210390 containerd[1702]: 2025-01-15 12:51:41.196 [INFO][5927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" HandleID="k8s-pod-network.95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:41.210390 containerd[1702]: 2025-01-15 12:51:41.196 [INFO][5927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:41.210390 containerd[1702]: 2025-01-15 12:51:41.196 [INFO][5927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:41.210390 containerd[1702]: 2025-01-15 12:51:41.205 [WARNING][5927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" HandleID="k8s-pod-network.95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:41.210390 containerd[1702]: 2025-01-15 12:51:41.205 [INFO][5927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" HandleID="k8s-pod-network.95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:41.210390 containerd[1702]: 2025-01-15 12:51:41.206 [INFO][5927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:41.210390 containerd[1702]: 2025-01-15 12:51:41.208 [INFO][5920] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:41.210390 containerd[1702]: time="2025-01-15T12:51:41.210199174Z" level=info msg="TearDown network for sandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\" successfully" Jan 15 12:51:41.210390 containerd[1702]: time="2025-01-15T12:51:41.210225694Z" level=info msg="StopPodSandbox for \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\" returns successfully" Jan 15 12:51:41.211562 containerd[1702]: time="2025-01-15T12:51:41.211273055Z" level=info msg="RemovePodSandbox for \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\"" Jan 15 12:51:41.211562 containerd[1702]: time="2025-01-15T12:51:41.211306935Z" level=info msg="Forcibly stopping sandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\"" Jan 15 12:51:41.290795 containerd[1702]: 2025-01-15 12:51:41.258 [WARNING][5945] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c4e3b7d0-351b-4511-9466-e0a6b59b0959", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-f89ceb891c", ContainerID:"d9318501281f2012b1e79854dcf515070de53d145cc09cc85ee96d0a66f51719", Pod:"coredns-7db6d8ff4d-9b4kr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b778bc43b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:41.290795 containerd[1702]: 2025-01-15 12:51:41.258 [INFO][5945] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:41.290795 containerd[1702]: 2025-01-15 12:51:41.258 [INFO][5945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" iface="eth0" netns="" Jan 15 12:51:41.290795 containerd[1702]: 2025-01-15 12:51:41.258 [INFO][5945] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:41.290795 containerd[1702]: 2025-01-15 12:51:41.258 [INFO][5945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:41.290795 containerd[1702]: 2025-01-15 12:51:41.278 [INFO][5951] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" HandleID="k8s-pod-network.95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:41.290795 containerd[1702]: 2025-01-15 12:51:41.278 [INFO][5951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:41.290795 containerd[1702]: 2025-01-15 12:51:41.278 [INFO][5951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:41.290795 containerd[1702]: 2025-01-15 12:51:41.286 [WARNING][5951] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" HandleID="k8s-pod-network.95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:41.290795 containerd[1702]: 2025-01-15 12:51:41.286 [INFO][5951] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" HandleID="k8s-pod-network.95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Workload="ci--4081.3.0--a--f89ceb891c-k8s-coredns--7db6d8ff4d--9b4kr-eth0" Jan 15 12:51:41.290795 containerd[1702]: 2025-01-15 12:51:41.287 [INFO][5951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:41.290795 containerd[1702]: 2025-01-15 12:51:41.289 [INFO][5945] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf" Jan 15 12:51:41.291809 containerd[1702]: time="2025-01-15T12:51:41.290890586Z" level=info msg="TearDown network for sandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\" successfully" Jan 15 12:51:41.297167 containerd[1702]: time="2025-01-15T12:51:41.297070153Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:51:41.297385 containerd[1702]: time="2025-01-15T12:51:41.297151553Z" level=info msg="RemovePodSandbox \"95f143b43792ec73b4e112e6d11fa1799d2594fb9ab32f019e00b05383cdfcaf\" returns successfully" Jan 15 12:51:55.520615 kubelet[3253]: I0115 12:51:55.520388 3253 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:52:13.330633 kubelet[3253]: I0115 12:52:13.329765 3253 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:53:37.890435 systemd[1]: Started sshd@7-10.200.20.39:22-10.200.16.10:58646.service - OpenSSH per-connection server daemon (10.200.16.10:58646). Jan 15 12:53:38.347151 sshd[6226]: Accepted publickey for core from 10.200.16.10 port 58646 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:38.349582 sshd[6226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:38.354452 systemd-logind[1665]: New session 10 of user core. Jan 15 12:53:38.356898 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 15 12:53:38.756960 sshd[6226]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:38.759796 systemd[1]: sshd@7-10.200.20.39:22-10.200.16.10:58646.service: Deactivated successfully. Jan 15 12:53:38.762255 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 12:53:38.764616 systemd-logind[1665]: Session 10 logged out. Waiting for processes to exit. Jan 15 12:53:38.765679 systemd-logind[1665]: Removed session 10. Jan 15 12:53:43.832874 systemd[1]: Started sshd@8-10.200.20.39:22-10.200.16.10:58658.service - OpenSSH per-connection server daemon (10.200.16.10:58658). 
Jan 15 12:53:44.257604 sshd[6245]: Accepted publickey for core from 10.200.16.10 port 58658 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:44.259103 sshd[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:44.263672 systemd-logind[1665]: New session 11 of user core. Jan 15 12:53:44.270928 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 12:53:44.626060 sshd[6245]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:44.629116 systemd[1]: sshd@8-10.200.20.39:22-10.200.16.10:58658.service: Deactivated successfully. Jan 15 12:53:44.632168 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 12:53:44.633870 systemd-logind[1665]: Session 11 logged out. Waiting for processes to exit. Jan 15 12:53:44.635376 systemd-logind[1665]: Removed session 11. Jan 15 12:53:49.703931 systemd[1]: Started sshd@9-10.200.20.39:22-10.200.16.10:50180.service - OpenSSH per-connection server daemon (10.200.16.10:50180). Jan 15 12:53:50.122163 sshd[6278]: Accepted publickey for core from 10.200.16.10 port 50180 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:50.123549 sshd[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:50.127424 systemd-logind[1665]: New session 12 of user core. Jan 15 12:53:50.134892 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 15 12:53:50.495363 sshd[6278]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:50.498898 systemd[1]: sshd@9-10.200.20.39:22-10.200.16.10:50180.service: Deactivated successfully. Jan 15 12:53:50.500568 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 12:53:50.504958 systemd-logind[1665]: Session 12 logged out. Waiting for processes to exit. Jan 15 12:53:50.506219 systemd-logind[1665]: Removed session 12. Jan 15 12:53:50.580998 systemd[1]: Started sshd@10-10.200.20.39:22-10.200.16.10:50190.service - OpenSSH per-connection server daemon (10.200.16.10:50190). Jan 15 12:53:51.028405 sshd[6292]: Accepted publickey for core from 10.200.16.10 port 50190 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:51.029839 sshd[6292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:51.034194 systemd-logind[1665]: New session 13 of user core. Jan 15 12:53:51.041885 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 15 12:53:51.459861 sshd[6292]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:51.463224 systemd-logind[1665]: Session 13 logged out. Waiting for processes to exit. Jan 15 12:53:51.464090 systemd[1]: sshd@10-10.200.20.39:22-10.200.16.10:50190.service: Deactivated successfully. Jan 15 12:53:51.466062 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 12:53:51.467308 systemd-logind[1665]: Removed session 13. Jan 15 12:53:51.539599 systemd[1]: Started sshd@11-10.200.20.39:22-10.200.16.10:50194.service - OpenSSH per-connection server daemon (10.200.16.10:50194). Jan 15 12:53:51.956891 sshd[6303]: Accepted publickey for core from 10.200.16.10 port 50194 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:51.958361 sshd[6303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:51.962943 systemd-logind[1665]: New session 14 of user core. Jan 15 12:53:51.968914 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 15 12:53:52.322177 sshd[6303]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:52.326064 systemd[1]: sshd@11-10.200.20.39:22-10.200.16.10:50194.service: Deactivated successfully. Jan 15 12:53:52.328553 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 12:53:52.329453 systemd-logind[1665]: Session 14 logged out. Waiting for processes to exit. Jan 15 12:53:52.330654 systemd-logind[1665]: Removed session 14. Jan 15 12:53:57.413970 systemd[1]: Started sshd@12-10.200.20.39:22-10.200.16.10:45582.service - OpenSSH per-connection server daemon (10.200.16.10:45582). Jan 15 12:53:57.863631 sshd[6323]: Accepted publickey for core from 10.200.16.10 port 45582 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:57.865037 sshd[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:57.869822 systemd-logind[1665]: New session 15 of user core. Jan 15 12:53:57.877897 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 15 12:53:58.262797 sshd[6323]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:58.266565 systemd[1]: sshd@12-10.200.20.39:22-10.200.16.10:45582.service: Deactivated successfully. Jan 15 12:53:58.268882 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 12:53:58.269554 systemd-logind[1665]: Session 15 logged out. Waiting for processes to exit. Jan 15 12:53:58.270533 systemd-logind[1665]: Removed session 15. Jan 15 12:54:03.344581 systemd[1]: Started sshd@13-10.200.20.39:22-10.200.16.10:45598.service - OpenSSH per-connection server daemon (10.200.16.10:45598). Jan 15 12:54:03.794631 sshd[6359]: Accepted publickey for core from 10.200.16.10 port 45598 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:03.796191 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:03.802173 systemd-logind[1665]: New session 16 of user core. Jan 15 12:54:03.806696 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 15 12:54:04.190434 sshd[6359]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:04.193756 systemd-logind[1665]: Session 16 logged out. Waiting for processes to exit. Jan 15 12:54:04.194327 systemd[1]: sshd@13-10.200.20.39:22-10.200.16.10:45598.service: Deactivated successfully. Jan 15 12:54:04.196571 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 12:54:04.197768 systemd-logind[1665]: Removed session 16. Jan 15 12:54:09.277003 systemd[1]: Started sshd@14-10.200.20.39:22-10.200.16.10:36648.service - OpenSSH per-connection server daemon (10.200.16.10:36648). Jan 15 12:54:09.723156 sshd[6375]: Accepted publickey for core from 10.200.16.10 port 36648 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:09.724646 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:09.728827 systemd-logind[1665]: New session 17 of user core. Jan 15 12:54:09.737909 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 15 12:54:10.152417 sshd[6375]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:10.156177 systemd[1]: sshd@14-10.200.20.39:22-10.200.16.10:36648.service: Deactivated successfully. Jan 15 12:54:10.159537 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 12:54:10.162840 systemd-logind[1665]: Session 17 logged out. Waiting for processes to exit. Jan 15 12:54:10.164547 systemd-logind[1665]: Removed session 17. 
Jan 15 12:54:15.233283 systemd[1]: Started sshd@15-10.200.20.39:22-10.200.16.10:36664.service - OpenSSH per-connection server daemon (10.200.16.10:36664). Jan 15 12:54:15.653646 sshd[6415]: Accepted publickey for core from 10.200.16.10 port 36664 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:15.655026 sshd[6415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:15.659150 systemd-logind[1665]: New session 18 of user core. Jan 15 12:54:15.663891 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 15 12:54:16.025441 sshd[6415]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:16.029527 systemd-logind[1665]: Session 18 logged out. Waiting for processes to exit. Jan 15 12:54:16.030281 systemd[1]: sshd@15-10.200.20.39:22-10.200.16.10:36664.service: Deactivated successfully. Jan 15 12:54:16.032970 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 12:54:16.034269 systemd-logind[1665]: Removed session 18. Jan 15 12:54:16.108063 systemd[1]: Started sshd@16-10.200.20.39:22-10.200.16.10:55724.service - OpenSSH per-connection server daemon (10.200.16.10:55724). Jan 15 12:54:16.525691 sshd[6428]: Accepted publickey for core from 10.200.16.10 port 55724 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:16.527176 sshd[6428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:16.531013 systemd-logind[1665]: New session 19 of user core. Jan 15 12:54:16.536872 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 12:54:16.996382 sshd[6428]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:17.000616 systemd[1]: sshd@16-10.200.20.39:22-10.200.16.10:55724.service: Deactivated successfully. Jan 15 12:54:17.002411 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 12:54:17.003635 systemd-logind[1665]: Session 19 logged out. Waiting for processes to exit. Jan 15 12:54:17.004916 systemd-logind[1665]: Removed session 19. Jan 15 12:54:17.078092 systemd[1]: Started sshd@17-10.200.20.39:22-10.200.16.10:55732.service - OpenSSH per-connection server daemon (10.200.16.10:55732). Jan 15 12:54:17.530218 sshd[6439]: Accepted publickey for core from 10.200.16.10 port 55732 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:17.531754 sshd[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:17.535875 systemd-logind[1665]: New session 20 of user core. Jan 15 12:54:17.544063 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 15 12:54:19.544355 sshd[6439]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:19.547900 systemd[1]: sshd@17-10.200.20.39:22-10.200.16.10:55732.service: Deactivated successfully. Jan 15 12:54:19.550669 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 12:54:19.551867 systemd-logind[1665]: Session 20 logged out. Waiting for processes to exit. Jan 15 12:54:19.552858 systemd-logind[1665]: Removed session 20. Jan 15 12:54:19.629041 systemd[1]: Started sshd@18-10.200.20.39:22-10.200.16.10:55738.service - OpenSSH per-connection server daemon (10.200.16.10:55738). 
Jan 15 12:54:20.048349 sshd[6459]: Accepted publickey for core from 10.200.16.10 port 55738 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM
Jan 15 12:54:20.050336 sshd[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:54:20.059204 systemd-logind[1665]: New session 21 of user core.
Jan 15 12:54:20.064130 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 15 12:54:20.559837 sshd[6459]: pam_unix(sshd:session): session closed for user core
Jan 15 12:54:20.564313 systemd-logind[1665]: Session 21 logged out. Waiting for processes to exit.
Jan 15 12:54:20.564574 systemd[1]: sshd@18-10.200.20.39:22-10.200.16.10:55738.service: Deactivated successfully.
Jan 15 12:54:20.567453 systemd[1]: session-21.scope: Deactivated successfully.
Jan 15 12:54:20.571906 systemd-logind[1665]: Removed session 21.
Jan 15 12:54:20.641930 systemd[1]: Started sshd@19-10.200.20.39:22-10.200.16.10:55744.service - OpenSSH per-connection server daemon (10.200.16.10:55744).
Jan 15 12:54:21.093177 sshd[6470]: Accepted publickey for core from 10.200.16.10 port 55744 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM
Jan 15 12:54:21.094609 sshd[6470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:54:21.098590 systemd-logind[1665]: New session 22 of user core.
Jan 15 12:54:21.106924 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 15 12:54:21.500349 sshd[6470]: pam_unix(sshd:session): session closed for user core
Jan 15 12:54:21.503365 systemd-logind[1665]: Session 22 logged out. Waiting for processes to exit.
Jan 15 12:54:21.505307 systemd[1]: sshd@19-10.200.20.39:22-10.200.16.10:55744.service: Deactivated successfully.
Jan 15 12:54:21.508686 systemd[1]: session-22.scope: Deactivated successfully.
Jan 15 12:54:21.510646 systemd-logind[1665]: Removed session 22.
Jan 15 12:54:26.589193 systemd[1]: Started sshd@20-10.200.20.39:22-10.200.16.10:34468.service - OpenSSH per-connection server daemon (10.200.16.10:34468).
Jan 15 12:54:27.040842 sshd[6488]: Accepted publickey for core from 10.200.16.10 port 34468 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM
Jan 15 12:54:27.042565 sshd[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:54:27.047047 systemd-logind[1665]: New session 23 of user core.
Jan 15 12:54:27.053921 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 15 12:54:27.441963 sshd[6488]: pam_unix(sshd:session): session closed for user core
Jan 15 12:54:27.445756 systemd-logind[1665]: Session 23 logged out. Waiting for processes to exit.
Jan 15 12:54:27.446523 systemd[1]: sshd@20-10.200.20.39:22-10.200.16.10:34468.service: Deactivated successfully.
Jan 15 12:54:27.449179 systemd[1]: session-23.scope: Deactivated successfully.
Jan 15 12:54:27.450379 systemd-logind[1665]: Removed session 23.
Jan 15 12:54:32.519847 systemd[1]: Started sshd@21-10.200.20.39:22-10.200.16.10:34482.service - OpenSSH per-connection server daemon (10.200.16.10:34482).
Jan 15 12:54:32.949930 sshd[6550]: Accepted publickey for core from 10.200.16.10 port 34482 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM
Jan 15 12:54:32.951455 sshd[6550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:54:32.955453 systemd-logind[1665]: New session 24 of user core.
Jan 15 12:54:32.961109 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 15 12:54:33.317547 sshd[6550]: pam_unix(sshd:session): session closed for user core
Jan 15 12:54:33.320593 systemd[1]: sshd@21-10.200.20.39:22-10.200.16.10:34482.service: Deactivated successfully.
Jan 15 12:54:33.322660 systemd[1]: session-24.scope: Deactivated successfully.
Jan 15 12:54:33.325461 systemd-logind[1665]: Session 24 logged out. Waiting for processes to exit.
Jan 15 12:54:33.326496 systemd-logind[1665]: Removed session 24.
Jan 15 12:54:38.398096 systemd[1]: Started sshd@22-10.200.20.39:22-10.200.16.10:55580.service - OpenSSH per-connection server daemon (10.200.16.10:55580).
Jan 15 12:54:38.816838 sshd[6563]: Accepted publickey for core from 10.200.16.10 port 55580 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM
Jan 15 12:54:38.818360 sshd[6563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:54:38.822490 systemd-logind[1665]: New session 25 of user core.
Jan 15 12:54:38.830914 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 15 12:54:39.196000 sshd[6563]: pam_unix(sshd:session): session closed for user core
Jan 15 12:54:39.199610 systemd[1]: sshd@22-10.200.20.39:22-10.200.16.10:55580.service: Deactivated successfully.
Jan 15 12:54:39.202755 systemd[1]: session-25.scope: Deactivated successfully.
Jan 15 12:54:39.203513 systemd-logind[1665]: Session 25 logged out. Waiting for processes to exit.
Jan 15 12:54:39.204482 systemd-logind[1665]: Removed session 25.
Jan 15 12:54:44.277788 systemd[1]: Started sshd@23-10.200.20.39:22-10.200.16.10:55586.service - OpenSSH per-connection server daemon (10.200.16.10:55586).
Jan 15 12:54:44.731525 sshd[6590]: Accepted publickey for core from 10.200.16.10 port 55586 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM
Jan 15 12:54:44.733393 sshd[6590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:54:44.738611 systemd-logind[1665]: New session 26 of user core.
Jan 15 12:54:44.743887 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 15 12:54:45.118136 sshd[6590]: pam_unix(sshd:session): session closed for user core
Jan 15 12:54:45.122302 systemd[1]: sshd@23-10.200.20.39:22-10.200.16.10:55586.service: Deactivated successfully.
Jan 15 12:54:45.125123 systemd[1]: session-26.scope: Deactivated successfully.
Jan 15 12:54:45.126778 systemd-logind[1665]: Session 26 logged out. Waiting for processes to exit.
Jan 15 12:54:45.127862 systemd-logind[1665]: Removed session 26.
Jan 15 12:54:50.204336 systemd[1]: Started sshd@24-10.200.20.39:22-10.200.16.10:59322.service - OpenSSH per-connection server daemon (10.200.16.10:59322).
Jan 15 12:54:50.655110 sshd[6622]: Accepted publickey for core from 10.200.16.10 port 59322 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM
Jan 15 12:54:50.656523 sshd[6622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:54:50.661933 systemd-logind[1665]: New session 27 of user core.
Jan 15 12:54:50.668967 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 15 12:54:51.042047 sshd[6622]: pam_unix(sshd:session): session closed for user core
Jan 15 12:54:51.045460 systemd-logind[1665]: Session 27 logged out. Waiting for processes to exit.
Jan 15 12:54:51.045856 systemd[1]: sshd@24-10.200.20.39:22-10.200.16.10:59322.service: Deactivated successfully.
Jan 15 12:54:51.048332 systemd[1]: session-27.scope: Deactivated successfully.
Jan 15 12:54:51.049410 systemd-logind[1665]: Removed session 27.
Jan 15 12:54:56.125953 systemd[1]: Started sshd@25-10.200.20.39:22-10.200.16.10:54388.service - OpenSSH per-connection server daemon (10.200.16.10:54388).
Jan 15 12:54:56.537999 sshd[6637]: Accepted publickey for core from 10.200.16.10 port 54388 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM
Jan 15 12:54:56.539449 sshd[6637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:54:56.543741 systemd-logind[1665]: New session 28 of user core.
Jan 15 12:54:56.549859 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 15 12:54:56.907364 sshd[6637]: pam_unix(sshd:session): session closed for user core
Jan 15 12:54:56.911629 systemd[1]: sshd@25-10.200.20.39:22-10.200.16.10:54388.service: Deactivated successfully.
Jan 15 12:54:56.914062 systemd[1]: session-28.scope: Deactivated successfully.
Jan 15 12:54:56.914688 systemd-logind[1665]: Session 28 logged out. Waiting for processes to exit.
Jan 15 12:54:56.915780 systemd-logind[1665]: Removed session 28.