Jan 30 14:10:38.291114 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 14:10:38.291135 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 14:10:38.291143 kernel: KASLR enabled
Jan 30 14:10:38.291149 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 30 14:10:38.291156 kernel: printk: bootconsole [pl11] enabled
Jan 30 14:10:38.291162 kernel: efi: EFI v2.7 by EDK II
Jan 30 14:10:38.291169 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 30 14:10:38.291185 kernel: random: crng init done
Jan 30 14:10:38.291192 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:10:38.291198 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 30 14:10:38.291204 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:38.291210 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:38.291218 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 30 14:10:38.291224 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:38.291232 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:38.291238 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:38.291244 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:38.291252 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:38.291259 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:38.291265 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 30 14:10:38.291271 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:38.291278 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 30 14:10:38.291284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 30 14:10:38.291290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 30 14:10:38.291297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 30 14:10:38.291303 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 30 14:10:38.291309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 30 14:10:38.291316 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 30 14:10:38.291324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 30 14:10:38.291330 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 30 14:10:38.291336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 30 14:10:38.291343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 30 14:10:38.291349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 30 14:10:38.291355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 30 14:10:38.291361 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 30 14:10:38.291368 kernel: Zone ranges:
Jan 30 14:10:38.291374 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 30 14:10:38.291380 kernel: DMA32 empty
Jan 30 14:10:38.291387 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 14:10:38.291393 kernel: Movable zone start for each node
Jan 30 14:10:38.291403 kernel: Early memory node ranges
Jan 30 14:10:38.291410 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 30 14:10:38.291417 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 30 14:10:38.291424 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 30 14:10:38.291430 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 30 14:10:38.291439 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 30 14:10:38.291445 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 30 14:10:38.291452 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 14:10:38.291459 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 30 14:10:38.291466 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 30 14:10:38.291472 kernel: psci: probing for conduit method from ACPI.
Jan 30 14:10:38.291479 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 14:10:38.291486 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 14:10:38.291493 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 30 14:10:38.291499 kernel: psci: SMC Calling Convention v1.4
Jan 30 14:10:38.291506 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 30 14:10:38.291513 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 30 14:10:38.291521 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 14:10:38.291528 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 14:10:38.291534 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 14:10:38.291541 kernel: Detected PIPT I-cache on CPU0
Jan 30 14:10:38.291548 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 14:10:38.291554 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 14:10:38.291561 kernel: CPU features: detected: Spectre-BHB
Jan 30 14:10:38.291568 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 14:10:38.291574 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 14:10:38.291581 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 14:10:38.291587 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 30 14:10:38.291596 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 14:10:38.291602 kernel: alternatives: applying boot alternatives
Jan 30 14:10:38.291610 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:10:38.291618 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:10:38.291624 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:10:38.291631 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:10:38.291638 kernel: Fallback order for Node 0: 0
Jan 30 14:10:38.291645 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 30 14:10:38.291651 kernel: Policy zone: Normal
Jan 30 14:10:38.291658 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:10:38.291665 kernel: software IO TLB: area num 2.
Jan 30 14:10:38.291673 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 30 14:10:38.291680 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved)
Jan 30 14:10:38.291687 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:10:38.291694 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:10:38.291701 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:10:38.291708 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:10:38.291715 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:10:38.291722 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:10:38.291729 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:10:38.291736 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:10:38.291742 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 14:10:38.291751 kernel: GICv3: 960 SPIs implemented
Jan 30 14:10:38.291757 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 14:10:38.291764 kernel: Root IRQ handler: gic_handle_irq
Jan 30 14:10:38.291771 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 14:10:38.291778 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 30 14:10:38.291784 kernel: ITS: No ITS available, not enabling LPIs
Jan 30 14:10:38.291791 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:10:38.291798 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 14:10:38.291805 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 14:10:38.291812 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 14:10:38.291819 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 14:10:38.291827 kernel: Console: colour dummy device 80x25
Jan 30 14:10:38.291834 kernel: printk: console [tty1] enabled
Jan 30 14:10:38.291841 kernel: ACPI: Core revision 20230628
Jan 30 14:10:38.291848 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 14:10:38.291855 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:10:38.291862 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:10:38.291869 kernel: landlock: Up and running.
Jan 30 14:10:38.291876 kernel: SELinux: Initializing.
Jan 30 14:10:38.291883 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:10:38.291889 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:10:38.291898 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:10:38.291905 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:10:38.291912 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 30 14:10:38.291919 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 30 14:10:38.291926 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 30 14:10:38.291933 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:10:38.291940 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:10:38.291953 kernel: Remapping and enabling EFI services.
Jan 30 14:10:38.291964 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:10:38.291972 kernel: Detected PIPT I-cache on CPU1
Jan 30 14:10:38.291981 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 30 14:10:38.291992 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 14:10:38.292001 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 14:10:38.292009 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:10:38.292017 kernel: SMP: Total of 2 processors activated.
Jan 30 14:10:38.292026 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 14:10:38.292037 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 30 14:10:38.292046 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 14:10:38.292054 kernel: CPU features: detected: CRC32 instructions
Jan 30 14:10:38.292063 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 14:10:38.292072 kernel: CPU features: detected: LSE atomic instructions
Jan 30 14:10:38.292080 kernel: CPU features: detected: Privileged Access Never
Jan 30 14:10:38.292088 kernel: CPU: All CPU(s) started at EL1
Jan 30 14:10:38.292097 kernel: alternatives: applying system-wide alternatives
Jan 30 14:10:38.292106 kernel: devtmpfs: initialized
Jan 30 14:10:38.292116 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:10:38.292124 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:10:38.292131 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:10:38.292138 kernel: SMBIOS 3.1.0 present.
Jan 30 14:10:38.292147 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 30 14:10:38.292155 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:10:38.292164 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 14:10:38.292181 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 14:10:38.292190 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 14:10:38.292201 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:10:38.292209 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 30 14:10:38.292218 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:10:38.292227 kernel: cpuidle: using governor menu
Jan 30 14:10:38.292235 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 14:10:38.292243 kernel: ASID allocator initialised with 32768 entries
Jan 30 14:10:38.292250 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:10:38.292257 kernel: Serial: AMBA PL011 UART driver
Jan 30 14:10:38.292265 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 14:10:38.292274 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 14:10:38.292281 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 14:10:38.292289 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:10:38.292296 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:10:38.292303 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 14:10:38.292311 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 14:10:38.292318 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:10:38.292325 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:10:38.292333 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 14:10:38.292341 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 14:10:38.292349 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:10:38.292356 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:10:38.292363 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:10:38.292370 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:10:38.292377 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:10:38.292385 kernel: ACPI: Interpreter enabled
Jan 30 14:10:38.292392 kernel: ACPI: Using GIC for interrupt routing
Jan 30 14:10:38.292399 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 14:10:38.292408 kernel: printk: console [ttyAMA0] enabled
Jan 30 14:10:38.292415 kernel: printk: bootconsole [pl11] disabled
Jan 30 14:10:38.292422 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 30 14:10:38.292429 kernel: iommu: Default domain type: Translated
Jan 30 14:10:38.292437 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 14:10:38.292444 kernel: efivars: Registered efivars operations
Jan 30 14:10:38.292451 kernel: vgaarb: loaded
Jan 30 14:10:38.292459 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 14:10:38.292466 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:10:38.292475 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:10:38.292482 kernel: pnp: PnP ACPI init
Jan 30 14:10:38.292490 kernel: pnp: PnP ACPI: found 0 devices
Jan 30 14:10:38.292497 kernel: NET: Registered PF_INET protocol family
Jan 30 14:10:38.292504 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:10:38.292512 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 14:10:38.292519 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:10:38.292526 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:10:38.292533 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 14:10:38.292543 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 14:10:38.292550 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:10:38.292557 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:10:38.292565 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:10:38.292572 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:10:38.292579 kernel: kvm [1]: HYP mode not available
Jan 30 14:10:38.292586 kernel: Initialise system trusted keyrings
Jan 30 14:10:38.292594 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 14:10:38.292601 kernel: Key type asymmetric registered
Jan 30 14:10:38.292609 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:10:38.292617 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 14:10:38.292624 kernel: io scheduler mq-deadline registered
Jan 30 14:10:38.292631 kernel: io scheduler kyber registered
Jan 30 14:10:38.292638 kernel: io scheduler bfq registered
Jan 30 14:10:38.292645 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:10:38.292653 kernel: thunder_xcv, ver 1.0
Jan 30 14:10:38.292660 kernel: thunder_bgx, ver 1.0
Jan 30 14:10:38.292667 kernel: nicpf, ver 1.0
Jan 30 14:10:38.292675 kernel: nicvf, ver 1.0
Jan 30 14:10:38.292818 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 14:10:38.292891 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:10:37 UTC (1738246237)
Jan 30 14:10:38.292901 kernel: efifb: probing for efifb
Jan 30 14:10:38.292908 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 30 14:10:38.292916 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 30 14:10:38.292923 kernel: efifb: scrolling: redraw
Jan 30 14:10:38.292931 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 14:10:38.292940 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 14:10:38.292948 kernel: fb0: EFI VGA frame buffer device
Jan 30 14:10:38.292955 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 30 14:10:38.292962 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 14:10:38.292969 kernel: No ACPI PMU IRQ for CPU0
Jan 30 14:10:38.292976 kernel: No ACPI PMU IRQ for CPU1
Jan 30 14:10:38.292984 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 30 14:10:38.292991 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 14:10:38.292998 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 14:10:38.293007 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:10:38.293014 kernel: Segment Routing with IPv6
Jan 30 14:10:38.293021 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:10:38.293028 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:10:38.293035 kernel: Key type dns_resolver registered
Jan 30 14:10:38.293043 kernel: registered taskstats version 1
Jan 30 14:10:38.293050 kernel: Loading compiled-in X.509 certificates
Jan 30 14:10:38.293057 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 14:10:38.293064 kernel: Key type .fscrypt registered
Jan 30 14:10:38.293073 kernel: Key type fscrypt-provisioning registered
Jan 30 14:10:38.293080 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:10:38.293088 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:10:38.293095 kernel: ima: No architecture policies found
Jan 30 14:10:38.293102 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 14:10:38.293109 kernel: clk: Disabling unused clocks
Jan 30 14:10:38.293117 kernel: Freeing unused kernel memory: 39360K
Jan 30 14:10:38.293124 kernel: Run /init as init process
Jan 30 14:10:38.293131 kernel: with arguments:
Jan 30 14:10:38.293140 kernel: /init
Jan 30 14:10:38.293147 kernel: with environment:
Jan 30 14:10:38.293154 kernel: HOME=/
Jan 30 14:10:38.293161 kernel: TERM=linux
Jan 30 14:10:38.293168 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:10:38.293188 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:10:38.293198 systemd[1]: Detected virtualization microsoft.
Jan 30 14:10:38.293206 systemd[1]: Detected architecture arm64.
Jan 30 14:10:38.293215 systemd[1]: Running in initrd.
Jan 30 14:10:38.293223 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:10:38.293230 systemd[1]: Hostname set to .
Jan 30 14:10:38.293238 systemd[1]: Initializing machine ID from random generator.
Jan 30 14:10:38.293246 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:10:38.293253 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:10:38.293261 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:10:38.293270 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:10:38.293279 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:10:38.293287 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:10:38.293295 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:10:38.293305 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:10:38.293313 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:10:38.293321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:10:38.293330 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:10:38.293338 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:10:38.293346 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:10:38.293354 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:10:38.293362 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:10:38.293370 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:10:38.293378 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:10:38.293385 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:10:38.293393 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:10:38.293402 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:10:38.293410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:10:38.293418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:10:38.293426 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:10:38.293434 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:10:38.293442 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:10:38.293450 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:10:38.293458 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:10:38.293465 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:10:38.293475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:10:38.293499 systemd-journald[216]: Collecting audit messages is disabled.
Jan 30 14:10:38.293518 systemd-journald[216]: Journal started
Jan 30 14:10:38.293538 systemd-journald[216]: Runtime Journal (/run/log/journal/d775a738134e41329667ab89a733cd36) is 8.0M, max 78.5M, 70.5M free.
Jan 30 14:10:38.313861 systemd-modules-load[217]: Inserted module 'overlay'
Jan 30 14:10:38.319855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:38.337004 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:10:38.340259 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:10:38.364547 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 14:10:38.364570 kernel: Bridge firewalling registered
Jan 30 14:10:38.360819 systemd-modules-load[217]: Inserted module 'br_netfilter'
Jan 30 14:10:38.362510 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:10:38.373762 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 14:10:38.380798 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:10:38.392605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:38.415562 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:38.424358 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:10:38.443573 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:10:38.466404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:10:38.481379 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:38.488936 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:10:38.501413 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:10:38.513093 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:10:38.541344 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 14:10:38.556451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:10:38.572960 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:10:38.588165 dracut-cmdline[250]: dracut-dracut-053
Jan 30 14:10:38.588165 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:10:38.624445 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:10:38.638495 systemd-resolved[255]: Positive Trust Anchors:
Jan 30 14:10:38.638505 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:10:38.638537 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:10:38.640775 systemd-resolved[255]: Defaulting to hostname 'linux'.
Jan 30 14:10:38.642549 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:10:38.653384 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:10:38.770210 kernel: SCSI subsystem initialized
Jan 30 14:10:38.777209 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:10:38.788209 kernel: iscsi: registered transport (tcp)
Jan 30 14:10:38.805668 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:10:38.805706 kernel: QLogic iSCSI HBA Driver
Jan 30 14:10:38.849351 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:10:38.870354 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:10:38.901230 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:10:38.901299 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:10:38.907472 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:10:38.958199 kernel: raid6: neonx8 gen() 15770 MB/s
Jan 30 14:10:38.977190 kernel: raid6: neonx4 gen() 15508 MB/s
Jan 30 14:10:38.997184 kernel: raid6: neonx2 gen() 13245 MB/s
Jan 30 14:10:39.018185 kernel: raid6: neonx1 gen() 10491 MB/s
Jan 30 14:10:39.038184 kernel: raid6: int64x8 gen() 6953 MB/s
Jan 30 14:10:39.058186 kernel: raid6: int64x4 gen() 7353 MB/s
Jan 30 14:10:39.079186 kernel: raid6: int64x2 gen() 6133 MB/s
Jan 30 14:10:39.102312 kernel: raid6: int64x1 gen() 5059 MB/s
Jan 30 14:10:39.102330 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s
Jan 30 14:10:39.125997 kernel: raid6: .... xor() 11937 MB/s, rmw enabled
Jan 30 14:10:39.126011 kernel: raid6: using neon recovery algorithm
Jan 30 14:10:39.137696 kernel: xor: measuring software checksum speed
Jan 30 14:10:39.137711 kernel: 8regs : 19797 MB/sec
Jan 30 14:10:39.141015 kernel: 32regs : 19627 MB/sec
Jan 30 14:10:39.144456 kernel: arm64_neon : 26936 MB/sec
Jan 30 14:10:39.148665 kernel: xor: using function: arm64_neon (26936 MB/sec)
Jan 30 14:10:39.200198 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:10:39.209922 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:10:39.225364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:10:39.247993 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Jan 30 14:10:39.253495 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:10:39.272421 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:10:39.296144 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Jan 30 14:10:39.326218 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:10:39.344423 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:10:39.386370 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:10:39.410471 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:10:39.434869 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:10:39.447910 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:10:39.464404 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:10:39.475329 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:10:39.503508 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:10:39.511011 kernel: hv_vmbus: Vmbus version:5.3
Jan 30 14:10:39.527020 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:10:39.527215 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:39.546745 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:39.602281 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 30 14:10:39.602305 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 30 14:10:39.602314 kernel: hv_vmbus: registering driver hv_storvsc
Jan 30 14:10:39.602324 kernel: scsi host0: storvsc_host_t
Jan 30 14:10:39.602506 kernel: hv_vmbus: registering driver hv_netvsc
Jan 30 14:10:39.602517 kernel: scsi host1: storvsc_host_t
Jan 30 14:10:39.602616 kernel: hv_vmbus: registering driver hid_hyperv
Jan 30 14:10:39.602626 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 30 14:10:39.602650 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 30 14:10:39.602660 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 30 14:10:39.574525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:39.615271 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 30 14:10:39.574868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:39.647835 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 30 14:10:39.647874 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 30 14:10:39.642026 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:39.669193 kernel: PTP clock support registered
Jan 30 14:10:39.669247 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:39.688863 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:10:39.712340 kernel: hv_utils: Registering HyperV Utility Driver
Jan 30 14:10:39.712365 kernel: hv_netvsc 00224877-20a3-0022-4877-20a300224877 eth0: VF slot 1 added
Jan 30 14:10:39.712754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:39.712942 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:39.749948 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Jan 30 14:10:40.142382 kernel: hv_vmbus: registering driver hv_pci Jan 30 14:10:40.142398 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 14:10:40.142416 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 30 14:10:40.180315 kernel: hv_vmbus: registering driver hv_utils Jan 30 14:10:40.180333 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 30 14:10:40.180453 kernel: hv_utils: Heartbeat IC version 3.0 Jan 30 14:10:40.180464 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 30 14:10:40.180556 kernel: hv_pci 60c8d0a3-f731-448f-8679-d873b3d29bed: PCI VMBus probing: Using version 0x10004 Jan 30 14:10:40.209451 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 30 14:10:40.209589 kernel: hv_utils: Shutdown IC version 3.2 Jan 30 14:10:40.209608 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 30 14:10:40.209719 kernel: hv_pci 60c8d0a3-f731-448f-8679-d873b3d29bed: PCI host bridge to bus f731:00 Jan 30 14:10:40.209911 kernel: hv_utils: TimeSync IC version 4.0 Jan 30 14:10:40.209929 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Jan 30 14:10:40.210053 kernel: pci_bus f731:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 30 14:10:40.210158 kernel: pci_bus f731:00: No busn resource found for root bus, will use [bus 00-ff] Jan 30 14:10:40.210410 kernel: pci f731:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 30 14:10:40.210553 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:40.210567 kernel: pci f731:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 30 14:10:40.210726 kernel: pci f731:00:02.0: enabling Extended Tags Jan 30 14:10:40.210829 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 30 14:10:40.210927 kernel: pci f731:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f731:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 30 
14:10:40.211022 kernel: pci_bus f731:00: busn_res: [bus 00-ff] end is updated to 00 Jan 30 14:10:40.211106 kernel: pci f731:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 30 14:10:40.131793 systemd-resolved[255]: Clock change detected. Flushing caches. Jan 30 14:10:40.135428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:40.222277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:40.252427 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:10:40.277363 kernel: mlx5_core f731:00:02.0: enabling device (0000 -> 0002) Jan 30 14:10:40.494609 kernel: mlx5_core f731:00:02.0: firmware version: 16.30.1284 Jan 30 14:10:40.494742 kernel: hv_netvsc 00224877-20a3-0022-4877-20a300224877 eth0: VF registering: eth1 Jan 30 14:10:40.494834 kernel: mlx5_core f731:00:02.0 eth1: joined to eth0 Jan 30 14:10:40.494938 kernel: mlx5_core f731:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 30 14:10:40.305273 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:40.508314 kernel: mlx5_core f731:00:02.0 enP63281s1: renamed from eth1 Jan 30 14:10:40.794172 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 30 14:10:40.886568 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 30 14:10:40.905310 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (497) Jan 30 14:10:40.908906 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 30 14:10:40.915969 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. 
Jan 30 14:10:40.945500 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:10:40.990449 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (503) Jan 30 14:10:41.003428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 14:10:41.986244 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:41.987100 disk-uuid[603]: The operation has completed successfully. Jan 30 14:10:42.046800 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:10:42.046899 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:10:42.075373 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:10:42.091323 sh[720]: Success Jan 30 14:10:42.123330 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 14:10:42.299734 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:10:42.315362 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:10:42.324802 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 14:10:42.356752 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 14:10:42.356802 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:42.363546 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:10:42.368611 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:10:42.372884 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:10:42.665963 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:10:42.671350 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jan 30 14:10:42.687500 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:10:42.695437 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:10:42.732918 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:42.732976 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:42.737390 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:10:42.757343 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:10:42.774484 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 14:10:42.780247 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:42.786794 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:10:42.805752 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 14:10:42.812000 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:10:42.832451 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:10:42.859954 systemd-networkd[904]: lo: Link UP Jan 30 14:10:42.859971 systemd-networkd[904]: lo: Gained carrier Jan 30 14:10:42.861693 systemd-networkd[904]: Enumeration completed Jan 30 14:10:42.861926 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:10:42.870145 systemd[1]: Reached target network.target - Network. Jan 30 14:10:42.878803 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:42.878807 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 14:10:42.960724 kernel: mlx5_core f731:00:02.0 enP63281s1: Link up Jan 30 14:10:43.001253 kernel: hv_netvsc 00224877-20a3-0022-4877-20a300224877 eth0: Data path switched to VF: enP63281s1 Jan 30 14:10:43.001488 systemd-networkd[904]: enP63281s1: Link UP Jan 30 14:10:43.001736 systemd-networkd[904]: eth0: Link UP Jan 30 14:10:43.002115 systemd-networkd[904]: eth0: Gained carrier Jan 30 14:10:43.002126 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:43.014845 systemd-networkd[904]: enP63281s1: Gained carrier Jan 30 14:10:43.038268 systemd-networkd[904]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 30 14:10:43.803142 ignition[899]: Ignition 2.19.0 Jan 30 14:10:43.803158 ignition[899]: Stage: fetch-offline Jan 30 14:10:43.807905 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:10:43.803207 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:43.803216 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:43.803355 ignition[899]: parsed url from cmdline: "" Jan 30 14:10:43.803359 ignition[899]: no config URL provided Jan 30 14:10:43.803363 ignition[899]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:10:43.834576 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 14:10:43.803371 ignition[899]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:10:43.803376 ignition[899]: failed to fetch config: resource requires networking Jan 30 14:10:43.803622 ignition[899]: Ignition finished successfully Jan 30 14:10:43.858527 ignition[912]: Ignition 2.19.0 Jan 30 14:10:43.858533 ignition[912]: Stage: fetch Jan 30 14:10:43.858734 ignition[912]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:43.858743 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:43.858843 ignition[912]: parsed url from cmdline: "" Jan 30 14:10:43.858849 ignition[912]: no config URL provided Jan 30 14:10:43.858854 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:10:43.858861 ignition[912]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:10:43.858894 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 30 14:10:43.964886 ignition[912]: GET result: OK Jan 30 14:10:43.964979 ignition[912]: config has been read from IMDS userdata Jan 30 14:10:43.965021 ignition[912]: parsing config with SHA512: c80e00135ae9849cb29cd85cd417e52d8a112a9d29830e7f9aa5f642fcecbf87cda4da51677b3f173cef68fef178810e927723622f78c54b90c384a69e8e9c4b Jan 30 14:10:43.968855 unknown[912]: fetched base config from "system" Jan 30 14:10:43.969374 ignition[912]: fetch: fetch complete Jan 30 14:10:43.968863 unknown[912]: fetched base config from "system" Jan 30 14:10:43.969379 ignition[912]: fetch: fetch passed Jan 30 14:10:43.968868 unknown[912]: fetched user config from "azure" Jan 30 14:10:43.969430 ignition[912]: Ignition finished successfully Jan 30 14:10:43.974622 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 14:10:43.994564 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 30 14:10:44.021031 ignition[919]: Ignition 2.19.0 Jan 30 14:10:44.021044 ignition[919]: Stage: kargs Jan 30 14:10:44.025695 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:10:44.021255 ignition[919]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:44.021265 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:44.046390 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:10:44.022445 ignition[919]: kargs: kargs passed Jan 30 14:10:44.022504 ignition[919]: Ignition finished successfully Jan 30 14:10:44.064374 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 14:10:44.060920 ignition[925]: Ignition 2.19.0 Jan 30 14:10:44.072752 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 14:10:44.060928 ignition[925]: Stage: disks Jan 30 14:10:44.083267 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:10:44.061119 ignition[925]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:44.096924 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:10:44.061129 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:44.106949 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:10:44.062231 ignition[925]: disks: disks passed Jan 30 14:10:44.120295 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:10:44.062285 ignition[925]: Ignition finished successfully Jan 30 14:10:44.153497 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 30 14:10:44.167329 systemd-networkd[904]: eth0: Gained IPv6LL Jan 30 14:10:44.220386 systemd-networkd[904]: enP63281s1: Gained IPv6LL Jan 30 14:10:44.239077 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 30 14:10:44.248598 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:10:44.265453 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 14:10:44.324238 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 30 14:10:44.324355 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:10:44.329152 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:10:44.379345 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:10:44.390208 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:10:44.399444 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 14:10:44.412841 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:10:44.461668 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (944) Jan 30 14:10:44.461701 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:44.461712 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:44.461724 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:10:44.412875 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:10:44.427893 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 14:10:44.468474 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 30 14:10:44.493243 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:10:44.494246 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 14:10:44.887581 coreos-metadata[946]: Jan 30 14:10:44.887 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 14:10:44.897374 coreos-metadata[946]: Jan 30 14:10:44.897 INFO Fetch successful Jan 30 14:10:44.897374 coreos-metadata[946]: Jan 30 14:10:44.897 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 30 14:10:44.916397 coreos-metadata[946]: Jan 30 14:10:44.912 INFO Fetch successful Jan 30 14:10:44.916397 coreos-metadata[946]: Jan 30 14:10:44.912 INFO wrote hostname ci-4081.3.0-a-1247579205 to /sysroot/etc/hostname Jan 30 14:10:44.916781 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:10:45.187197 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:10:45.233215 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:10:45.239435 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:10:45.247560 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:10:46.062946 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:10:46.078511 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 14:10:46.086436 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:10:46.108357 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:46.102849 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:10:46.135856 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 14:10:46.146625 ignition[1063]: INFO : Ignition 2.19.0 Jan 30 14:10:46.146625 ignition[1063]: INFO : Stage: mount Jan 30 14:10:46.146625 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:46.146625 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:46.146625 ignition[1063]: INFO : mount: mount passed Jan 30 14:10:46.146625 ignition[1063]: INFO : Ignition finished successfully Jan 30 14:10:46.149209 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:10:46.175466 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:10:46.200560 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:10:46.236752 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1074) Jan 30 14:10:46.236814 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:46.243299 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:46.247935 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:10:46.255246 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:10:46.256915 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:10:46.281293 ignition[1092]: INFO : Ignition 2.19.0 Jan 30 14:10:46.281293 ignition[1092]: INFO : Stage: files Jan 30 14:10:46.289512 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:46.289512 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:46.289512 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:10:46.333731 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:10:46.333731 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:10:46.390004 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:10:46.397429 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:10:46.397429 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:10:46.390448 unknown[1092]: wrote ssh authorized keys file for user: core Jan 30 14:10:46.418438 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:10:46.429204 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 30 14:10:46.599103 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 14:10:46.736820 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 30 14:10:47.199251 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 14:10:47.398502 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 14:10:47.398502 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: files passed Jan 30 14:10:47.418903 ignition[1092]: INFO : Ignition finished successfully Jan 30 14:10:47.439785 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 14:10:47.476558 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:10:47.495422 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 30 14:10:47.504407 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:10:47.504521 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 14:10:47.550511 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:47.558682 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:47.558682 initrd-setup-root-after-ignition[1120]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:47.551909 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:10:47.565787 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:10:47.602442 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 14:10:47.634711 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:10:47.634864 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:10:47.646824 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:10:47.658495 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:10:47.669491 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:10:47.687550 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:10:47.710923 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:10:47.729548 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:10:47.748125 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:10:47.754759 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 30 14:10:47.766666 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:10:47.777480 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:10:47.777656 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:10:47.793329 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:10:47.804952 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:10:47.814903 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:10:47.825164 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:10:47.837265 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:10:47.849467 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:10:47.860759 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:10:47.872910 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:10:47.885059 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:10:47.896148 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:10:47.905711 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:10:47.905883 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:10:47.920572 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:10:47.931721 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:10:47.943498 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:10:47.943623 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:10:47.956168 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:10:47.956364 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jan 30 14:10:47.973944 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:10:47.974120 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:10:47.985476 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:10:47.985621 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:10:47.997024 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 14:10:47.997255 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:10:48.029378 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:10:48.039847 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:10:48.077039 ignition[1144]: INFO : Ignition 2.19.0 Jan 30 14:10:48.077039 ignition[1144]: INFO : Stage: umount Jan 30 14:10:48.077039 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:48.077039 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:48.077039 ignition[1144]: INFO : umount: umount passed Jan 30 14:10:48.077039 ignition[1144]: INFO : Ignition finished successfully Jan 30 14:10:48.040025 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:10:48.064438 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:10:48.069614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:10:48.069760 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:10:48.083776 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:10:48.083895 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:10:48.095971 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:10:48.098159 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jan 30 14:10:48.111288 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:10:48.111399 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:10:48.121190 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:10:48.121250 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:10:48.138639 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:10:48.138707 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:10:48.150044 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 14:10:48.150101 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 14:10:48.161926 systemd[1]: Stopped target network.target - Network. Jan 30 14:10:48.173178 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:10:48.173275 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:10:48.184522 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:10:48.194987 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:10:48.195044 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:10:48.208594 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:10:48.218432 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:10:48.229232 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:10:48.229293 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:10:48.238986 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:10:48.239030 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:10:48.252346 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:10:48.252412 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 30 14:10:48.263508 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:10:48.263558 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:10:48.275652 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:10:48.285986 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:10:48.297883 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:10:48.302533 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:10:48.542634 kernel: hv_netvsc 00224877-20a3-0022-4877-20a300224877 eth0: Data path switched from VF: enP63281s1 Jan 30 14:10:48.302681 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:10:48.307856 systemd-networkd[904]: eth0: DHCPv6 lease lost Jan 30 14:10:48.318326 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:10:48.318538 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:10:48.338585 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:10:48.338647 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:10:48.373727 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:10:48.385150 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:10:48.385231 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:10:48.402362 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:10:48.402418 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:10:48.415622 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:10:48.415678 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:10:48.429067 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 30 14:10:48.429117 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:10:48.439928 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:10:48.483102 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:10:48.483405 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:10:48.498145 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:10:48.498211 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:10:48.525633 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:10:48.525683 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:10:48.537492 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:10:48.537549 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:10:48.553899 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:10:48.553974 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:10:48.571439 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:10:48.571496 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:48.623472 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:10:48.636384 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:10:48.636463 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:10:48.651659 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:10:48.651719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:48.665554 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 30 14:10:48.665654 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:10:48.677017 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:10:48.677104 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:10:48.751128 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:10:48.751290 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:10:48.758644 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:10:48.770311 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:10:48.770385 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:10:48.795481 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:10:48.886327 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Jan 30 14:10:48.816860 systemd[1]: Switching root. Jan 30 14:10:48.889876 systemd-journald[216]: Journal stopped
Jan 30 14:10:38.291265 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 30 14:10:38.291271 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 14:10:38.291278 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 30 14:10:38.291284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 30 14:10:38.291290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 30 14:10:38.291297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 30 14:10:38.291303 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 30 14:10:38.291309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 30 14:10:38.291316 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 30 14:10:38.291324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 30 14:10:38.291330 kernel: ACPI: SRAT: Node 0 PXM 0 [mem
0x80000000000-0xfffffffffff] Jan 30 14:10:38.291336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 30 14:10:38.291343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 30 14:10:38.291349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 30 14:10:38.291355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 30 14:10:38.291361 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 30 14:10:38.291368 kernel: Zone ranges: Jan 30 14:10:38.291374 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 30 14:10:38.291380 kernel: DMA32 empty Jan 30 14:10:38.291387 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 30 14:10:38.291393 kernel: Movable zone start for each node Jan 30 14:10:38.291403 kernel: Early memory node ranges Jan 30 14:10:38.291410 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 30 14:10:38.291417 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 30 14:10:38.291424 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 30 14:10:38.291430 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 30 14:10:38.291439 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 30 14:10:38.291445 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 30 14:10:38.291452 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 30 14:10:38.291459 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 30 14:10:38.291466 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 30 14:10:38.291472 kernel: psci: probing for conduit method from ACPI. Jan 30 14:10:38.291479 kernel: psci: PSCIv1.1 detected in firmware. Jan 30 14:10:38.291486 kernel: psci: Using standard PSCI v0.2 function IDs Jan 30 14:10:38.291493 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 30 14:10:38.291499 kernel: psci: SMC Calling Convention v1.4 Jan 30 14:10:38.291506 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 30 14:10:38.291513 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 30 14:10:38.291521 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 30 14:10:38.291528 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 30 14:10:38.291534 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 30 14:10:38.291541 kernel: Detected PIPT I-cache on CPU0 Jan 30 14:10:38.291548 kernel: CPU features: detected: GIC system register CPU interface Jan 30 14:10:38.291554 kernel: CPU features: detected: Hardware dirty bit management Jan 30 14:10:38.291561 kernel: CPU features: detected: Spectre-BHB Jan 30 14:10:38.291568 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 30 14:10:38.291574 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 30 14:10:38.291581 kernel: CPU features: detected: ARM erratum 1418040 Jan 30 14:10:38.291587 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 30 14:10:38.291596 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 30 14:10:38.291602 kernel: alternatives: applying boot alternatives Jan 30 14:10:38.291610 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 14:10:38.291618 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jan 30 14:10:38.291624 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 14:10:38.291631 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 14:10:38.291638 kernel: Fallback order for Node 0: 0 Jan 30 14:10:38.291645 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 30 14:10:38.291651 kernel: Policy zone: Normal Jan 30 14:10:38.291658 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 14:10:38.291665 kernel: software IO TLB: area num 2. Jan 30 14:10:38.291673 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 30 14:10:38.291680 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Jan 30 14:10:38.291687 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 14:10:38.291694 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 14:10:38.291701 kernel: rcu: RCU event tracing is enabled. Jan 30 14:10:38.291708 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 14:10:38.291715 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 14:10:38.291722 kernel: Tracing variant of Tasks RCU enabled. Jan 30 14:10:38.291729 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 30 14:10:38.291736 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 14:10:38.291742 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 30 14:10:38.291751 kernel: GICv3: 960 SPIs implemented Jan 30 14:10:38.291757 kernel: GICv3: 0 Extended SPIs implemented Jan 30 14:10:38.291764 kernel: Root IRQ handler: gic_handle_irq Jan 30 14:10:38.291771 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 30 14:10:38.291778 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 30 14:10:38.291784 kernel: ITS: No ITS available, not enabling LPIs Jan 30 14:10:38.291791 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 14:10:38.291798 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 14:10:38.291805 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 30 14:10:38.291812 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 30 14:10:38.291819 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 30 14:10:38.291827 kernel: Console: colour dummy device 80x25 Jan 30 14:10:38.291834 kernel: printk: console [tty1] enabled Jan 30 14:10:38.291841 kernel: ACPI: Core revision 20230628 Jan 30 14:10:38.291848 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 30 14:10:38.291855 kernel: pid_max: default: 32768 minimum: 301 Jan 30 14:10:38.291862 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 14:10:38.291869 kernel: landlock: Up and running. Jan 30 14:10:38.291876 kernel: SELinux: Initializing. 
Jan 30 14:10:38.291883 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 14:10:38.291889 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 14:10:38.291898 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 14:10:38.291905 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 14:10:38.291912 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jan 30 14:10:38.291919 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Jan 30 14:10:38.291926 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 30 14:10:38.291933 kernel: rcu: Hierarchical SRCU implementation. Jan 30 14:10:38.291940 kernel: rcu: Max phase no-delay instances is 400. Jan 30 14:10:38.291953 kernel: Remapping and enabling EFI services. Jan 30 14:10:38.291964 kernel: smp: Bringing up secondary CPUs ... Jan 30 14:10:38.291972 kernel: Detected PIPT I-cache on CPU1 Jan 30 14:10:38.291981 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 30 14:10:38.291992 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 14:10:38.292001 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 30 14:10:38.292009 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 14:10:38.292017 kernel: SMP: Total of 2 processors activated. 
Jan 30 14:10:38.292026 kernel: CPU features: detected: 32-bit EL0 Support Jan 30 14:10:38.292037 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 30 14:10:38.292046 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 30 14:10:38.292054 kernel: CPU features: detected: CRC32 instructions Jan 30 14:10:38.292063 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 30 14:10:38.292072 kernel: CPU features: detected: LSE atomic instructions Jan 30 14:10:38.292080 kernel: CPU features: detected: Privileged Access Never Jan 30 14:10:38.292088 kernel: CPU: All CPU(s) started at EL1 Jan 30 14:10:38.292097 kernel: alternatives: applying system-wide alternatives Jan 30 14:10:38.292106 kernel: devtmpfs: initialized Jan 30 14:10:38.292116 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 14:10:38.292124 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 14:10:38.292131 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 14:10:38.292138 kernel: SMBIOS 3.1.0 present. 
Jan 30 14:10:38.292147 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 30 14:10:38.292155 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 14:10:38.292164 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 30 14:10:38.292181 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 30 14:10:38.292190 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 30 14:10:38.292201 kernel: audit: initializing netlink subsys (disabled) Jan 30 14:10:38.292209 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 30 14:10:38.292218 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 14:10:38.292227 kernel: cpuidle: using governor menu Jan 30 14:10:38.292235 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 30 14:10:38.292243 kernel: ASID allocator initialised with 32768 entries Jan 30 14:10:38.292250 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 14:10:38.292257 kernel: Serial: AMBA PL011 UART driver Jan 30 14:10:38.292265 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 30 14:10:38.292274 kernel: Modules: 0 pages in range for non-PLT usage Jan 30 14:10:38.292281 kernel: Modules: 509040 pages in range for PLT usage Jan 30 14:10:38.292289 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 14:10:38.292296 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 14:10:38.292303 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 30 14:10:38.292311 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 30 14:10:38.292318 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 14:10:38.292325 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 14:10:38.292333 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Jan 30 14:10:38.292341 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 30 14:10:38.292349 kernel: ACPI: Added _OSI(Module Device) Jan 30 14:10:38.292356 kernel: ACPI: Added _OSI(Processor Device) Jan 30 14:10:38.292363 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 14:10:38.292370 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 14:10:38.292377 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 14:10:38.292385 kernel: ACPI: Interpreter enabled Jan 30 14:10:38.292392 kernel: ACPI: Using GIC for interrupt routing Jan 30 14:10:38.292399 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 30 14:10:38.292408 kernel: printk: console [ttyAMA0] enabled Jan 30 14:10:38.292415 kernel: printk: bootconsole [pl11] disabled Jan 30 14:10:38.292422 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 30 14:10:38.292429 kernel: iommu: Default domain type: Translated Jan 30 14:10:38.292437 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 30 14:10:38.292444 kernel: efivars: Registered efivars operations Jan 30 14:10:38.292451 kernel: vgaarb: loaded Jan 30 14:10:38.292459 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 30 14:10:38.292466 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 14:10:38.292475 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 14:10:38.292482 kernel: pnp: PnP ACPI init Jan 30 14:10:38.292490 kernel: pnp: PnP ACPI: found 0 devices Jan 30 14:10:38.292497 kernel: NET: Registered PF_INET protocol family Jan 30 14:10:38.292504 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 14:10:38.292512 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 14:10:38.292519 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 
14:10:38.292526 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 14:10:38.292533 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 14:10:38.292543 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 14:10:38.292550 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 14:10:38.292557 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 14:10:38.292565 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 14:10:38.292572 kernel: PCI: CLS 0 bytes, default 64 Jan 30 14:10:38.292579 kernel: kvm [1]: HYP mode not available Jan 30 14:10:38.292586 kernel: Initialise system trusted keyrings Jan 30 14:10:38.292594 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 14:10:38.292601 kernel: Key type asymmetric registered Jan 30 14:10:38.292609 kernel: Asymmetric key parser 'x509' registered Jan 30 14:10:38.292617 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 30 14:10:38.292624 kernel: io scheduler mq-deadline registered Jan 30 14:10:38.292631 kernel: io scheduler kyber registered Jan 30 14:10:38.292638 kernel: io scheduler bfq registered Jan 30 14:10:38.292645 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 14:10:38.292653 kernel: thunder_xcv, ver 1.0 Jan 30 14:10:38.292660 kernel: thunder_bgx, ver 1.0 Jan 30 14:10:38.292667 kernel: nicpf, ver 1.0 Jan 30 14:10:38.292675 kernel: nicvf, ver 1.0 Jan 30 14:10:38.292818 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 14:10:38.292891 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:10:37 UTC (1738246237) Jan 30 14:10:38.292901 kernel: efifb: probing for efifb Jan 30 14:10:38.292908 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 30 14:10:38.292916 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 30 14:10:38.292923 kernel: efifb: scrolling: 
redraw Jan 30 14:10:38.292931 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 30 14:10:38.292940 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 14:10:38.292948 kernel: fb0: EFI VGA frame buffer device Jan 30 14:10:38.292955 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 30 14:10:38.292962 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 14:10:38.292969 kernel: No ACPI PMU IRQ for CPU0 Jan 30 14:10:38.292976 kernel: No ACPI PMU IRQ for CPU1 Jan 30 14:10:38.292984 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jan 30 14:10:38.292991 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 14:10:38.292998 kernel: watchdog: Hard watchdog permanently disabled Jan 30 14:10:38.293007 kernel: NET: Registered PF_INET6 protocol family Jan 30 14:10:38.293014 kernel: Segment Routing with IPv6 Jan 30 14:10:38.293021 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 14:10:38.293028 kernel: NET: Registered PF_PACKET protocol family Jan 30 14:10:38.293035 kernel: Key type dns_resolver registered Jan 30 14:10:38.293043 kernel: registered taskstats version 1 Jan 30 14:10:38.293050 kernel: Loading compiled-in X.509 certificates Jan 30 14:10:38.293057 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 30 14:10:38.293064 kernel: Key type .fscrypt registered Jan 30 14:10:38.293073 kernel: Key type fscrypt-provisioning registered Jan 30 14:10:38.293080 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 30 14:10:38.293088 kernel: ima: Allocated hash algorithm: sha1 Jan 30 14:10:38.293095 kernel: ima: No architecture policies found Jan 30 14:10:38.293102 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 14:10:38.293109 kernel: clk: Disabling unused clocks Jan 30 14:10:38.293117 kernel: Freeing unused kernel memory: 39360K Jan 30 14:10:38.293124 kernel: Run /init as init process Jan 30 14:10:38.293131 kernel: with arguments: Jan 30 14:10:38.293140 kernel: /init Jan 30 14:10:38.293147 kernel: with environment: Jan 30 14:10:38.293154 kernel: HOME=/ Jan 30 14:10:38.293161 kernel: TERM=linux Jan 30 14:10:38.293168 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 14:10:38.293188 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:10:38.293198 systemd[1]: Detected virtualization microsoft. Jan 30 14:10:38.293206 systemd[1]: Detected architecture arm64. Jan 30 14:10:38.293215 systemd[1]: Running in initrd. Jan 30 14:10:38.293223 systemd[1]: No hostname configured, using default hostname. Jan 30 14:10:38.293230 systemd[1]: Hostname set to . Jan 30 14:10:38.293238 systemd[1]: Initializing machine ID from random generator. Jan 30 14:10:38.293246 systemd[1]: Queued start job for default target initrd.target. Jan 30 14:10:38.293253 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:10:38.293261 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:10:38.293270 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jan 30 14:10:38.293279 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:10:38.293287 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 14:10:38.293295 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 14:10:38.293305 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 14:10:38.293313 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 14:10:38.293321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:10:38.293330 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:10:38.293338 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:10:38.293346 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:10:38.293354 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:10:38.293362 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:10:38.293370 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:10:38.293378 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:10:38.293385 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 14:10:38.293393 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 14:10:38.293402 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:10:38.293410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:10:38.293418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:10:38.293426 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 30 14:10:38.293434 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 14:10:38.293442 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:10:38.293450 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 14:10:38.293458 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 14:10:38.293465 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:10:38.293475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:10:38.293499 systemd-journald[216]: Collecting audit messages is disabled. Jan 30 14:10:38.293518 systemd-journald[216]: Journal started Jan 30 14:10:38.293538 systemd-journald[216]: Runtime Journal (/run/log/journal/d775a738134e41329667ab89a733cd36) is 8.0M, max 78.5M, 70.5M free. Jan 30 14:10:38.313861 systemd-modules-load[217]: Inserted module 'overlay' Jan 30 14:10:38.319855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:38.337004 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:10:38.340259 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 14:10:38.364547 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:10:38.364570 kernel: Bridge firewalling registered Jan 30 14:10:38.360819 systemd-modules-load[217]: Inserted module 'br_netfilter' Jan 30 14:10:38.362510 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:10:38.373762 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:10:38.380798 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:10:38.392605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 14:10:38.415562 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:10:38.424358 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:10:38.443573 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:10:38.466404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:10:38.481379 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:38.488936 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:10:38.501413 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:10:38.513093 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:10:38.541344 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 14:10:38.556451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:10:38.572960 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:10:38.588165 dracut-cmdline[250]: dracut-dracut-053 Jan 30 14:10:38.588165 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 14:10:38.624445 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 30 14:10:38.638495 systemd-resolved[255]: Positive Trust Anchors: Jan 30 14:10:38.638505 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:10:38.638537 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:10:38.640775 systemd-resolved[255]: Defaulting to hostname 'linux'. Jan 30 14:10:38.642549 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:10:38.653384 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:10:38.770210 kernel: SCSI subsystem initialized Jan 30 14:10:38.777209 kernel: Loading iSCSI transport class v2.0-870. Jan 30 14:10:38.788209 kernel: iscsi: registered transport (tcp) Jan 30 14:10:38.805668 kernel: iscsi: registered transport (qla4xxx) Jan 30 14:10:38.805706 kernel: QLogic iSCSI HBA Driver Jan 30 14:10:38.849351 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 14:10:38.870354 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 14:10:38.901230 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 30 14:10:38.901299 kernel: device-mapper: uevent: version 1.0.3 Jan 30 14:10:38.907472 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 14:10:38.958199 kernel: raid6: neonx8 gen() 15770 MB/s Jan 30 14:10:38.977190 kernel: raid6: neonx4 gen() 15508 MB/s Jan 30 14:10:38.997184 kernel: raid6: neonx2 gen() 13245 MB/s Jan 30 14:10:39.018185 kernel: raid6: neonx1 gen() 10491 MB/s Jan 30 14:10:39.038184 kernel: raid6: int64x8 gen() 6953 MB/s Jan 30 14:10:39.058186 kernel: raid6: int64x4 gen() 7353 MB/s Jan 30 14:10:39.079186 kernel: raid6: int64x2 gen() 6133 MB/s Jan 30 14:10:39.102312 kernel: raid6: int64x1 gen() 5059 MB/s Jan 30 14:10:39.102330 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s Jan 30 14:10:39.125997 kernel: raid6: .... xor() 11937 MB/s, rmw enabled Jan 30 14:10:39.126011 kernel: raid6: using neon recovery algorithm Jan 30 14:10:39.137696 kernel: xor: measuring software checksum speed Jan 30 14:10:39.137711 kernel: 8regs : 19797 MB/sec Jan 30 14:10:39.141015 kernel: 32regs : 19627 MB/sec Jan 30 14:10:39.144456 kernel: arm64_neon : 26936 MB/sec Jan 30 14:10:39.148665 kernel: xor: using function: arm64_neon (26936 MB/sec) Jan 30 14:10:39.200198 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 14:10:39.209922 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:10:39.225364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:10:39.247993 systemd-udevd[438]: Using default interface naming scheme 'v255'. Jan 30 14:10:39.253495 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:10:39.272421 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 14:10:39.296144 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation Jan 30 14:10:39.326218 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 30 14:10:39.344423 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:10:39.386370 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:10:39.410471 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 14:10:39.434869 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 14:10:39.447910 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:10:39.464404 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:10:39.475329 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:10:39.503508 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 14:10:39.511011 kernel: hv_vmbus: Vmbus version:5.3 Jan 30 14:10:39.527020 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:10:39.527215 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:39.546745 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:10:39.602281 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 14:10:39.602305 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 30 14:10:39.602314 kernel: hv_vmbus: registering driver hv_storvsc Jan 30 14:10:39.602324 kernel: scsi host0: storvsc_host_t Jan 30 14:10:39.602506 kernel: hv_vmbus: registering driver hv_netvsc Jan 30 14:10:39.602517 kernel: scsi host1: storvsc_host_t Jan 30 14:10:39.602616 kernel: hv_vmbus: registering driver hid_hyperv Jan 30 14:10:39.602626 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 30 14:10:39.602650 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 14:10:39.602660 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 30 14:10:39.574525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:10:39.615271 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 30 14:10:39.574868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:39.647835 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 30 14:10:39.647874 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 30 14:10:39.642026 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:39.669193 kernel: PTP clock support registered Jan 30 14:10:39.669247 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:39.688863 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:10:39.712340 kernel: hv_utils: Registering HyperV Utility Driver Jan 30 14:10:39.712365 kernel: hv_netvsc 00224877-20a3-0022-4877-20a300224877 eth0: VF slot 1 added Jan 30 14:10:39.712754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:10:39.712942 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 14:10:39.749948 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Jan 30 14:10:40.142382 kernel: hv_vmbus: registering driver hv_pci Jan 30 14:10:40.142398 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 14:10:40.142416 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 30 14:10:40.180315 kernel: hv_vmbus: registering driver hv_utils Jan 30 14:10:40.180333 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 30 14:10:40.180453 kernel: hv_utils: Heartbeat IC version 3.0 Jan 30 14:10:40.180464 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 30 14:10:40.180556 kernel: hv_pci 60c8d0a3-f731-448f-8679-d873b3d29bed: PCI VMBus probing: Using version 0x10004 Jan 30 14:10:40.209451 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 30 14:10:40.209589 kernel: hv_utils: Shutdown IC version 3.2 Jan 30 14:10:40.209608 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 30 14:10:40.209719 kernel: hv_pci 60c8d0a3-f731-448f-8679-d873b3d29bed: PCI host bridge to bus f731:00 Jan 30 14:10:40.209911 kernel: hv_utils: TimeSync IC version 4.0 Jan 30 14:10:40.209929 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Jan 30 14:10:40.210053 kernel: pci_bus f731:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 30 14:10:40.210158 kernel: pci_bus f731:00: No busn resource found for root bus, will use [bus 00-ff] Jan 30 14:10:40.210410 kernel: pci f731:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 30 14:10:40.210553 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:40.210567 kernel: pci f731:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 30 14:10:40.210726 kernel: pci f731:00:02.0: enabling Extended Tags Jan 30 14:10:40.210829 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 30 14:10:40.210927 kernel: pci f731:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f731:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 30 
14:10:40.211022 kernel: pci_bus f731:00: busn_res: [bus 00-ff] end is updated to 00 Jan 30 14:10:40.211106 kernel: pci f731:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 30 14:10:40.131793 systemd-resolved[255]: Clock change detected. Flushing caches. Jan 30 14:10:40.135428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:40.222277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:40.252427 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:10:40.277363 kernel: mlx5_core f731:00:02.0: enabling device (0000 -> 0002) Jan 30 14:10:40.494609 kernel: mlx5_core f731:00:02.0: firmware version: 16.30.1284 Jan 30 14:10:40.494742 kernel: hv_netvsc 00224877-20a3-0022-4877-20a300224877 eth0: VF registering: eth1 Jan 30 14:10:40.494834 kernel: mlx5_core f731:00:02.0 eth1: joined to eth0 Jan 30 14:10:40.494938 kernel: mlx5_core f731:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 30 14:10:40.305273 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:40.508314 kernel: mlx5_core f731:00:02.0 enP63281s1: renamed from eth1 Jan 30 14:10:40.794172 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 30 14:10:40.886568 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 30 14:10:40.905310 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (497) Jan 30 14:10:40.908906 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 30 14:10:40.915969 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. 
Jan 30 14:10:40.945500 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:10:40.990449 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (503) Jan 30 14:10:41.003428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 14:10:41.986244 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:41.987100 disk-uuid[603]: The operation has completed successfully. Jan 30 14:10:42.046800 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:10:42.046899 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:10:42.075373 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:10:42.091323 sh[720]: Success Jan 30 14:10:42.123330 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 14:10:42.299734 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:10:42.315362 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:10:42.324802 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 14:10:42.356752 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 14:10:42.356802 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:42.363546 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:10:42.368611 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:10:42.372884 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:10:42.665963 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:10:42.671350 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jan 30 14:10:42.687500 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:10:42.695437 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:10:42.732918 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:42.732976 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:42.737390 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:10:42.757343 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:10:42.774484 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 14:10:42.780247 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:42.786794 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:10:42.805752 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 14:10:42.812000 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:10:42.832451 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:10:42.859954 systemd-networkd[904]: lo: Link UP Jan 30 14:10:42.859971 systemd-networkd[904]: lo: Gained carrier Jan 30 14:10:42.861693 systemd-networkd[904]: Enumeration completed Jan 30 14:10:42.861926 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:10:42.870145 systemd[1]: Reached target network.target - Network. Jan 30 14:10:42.878803 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:42.878807 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 14:10:42.960724 kernel: mlx5_core f731:00:02.0 enP63281s1: Link up Jan 30 14:10:43.001253 kernel: hv_netvsc 00224877-20a3-0022-4877-20a300224877 eth0: Data path switched to VF: enP63281s1 Jan 30 14:10:43.001488 systemd-networkd[904]: enP63281s1: Link UP Jan 30 14:10:43.001736 systemd-networkd[904]: eth0: Link UP Jan 30 14:10:43.002115 systemd-networkd[904]: eth0: Gained carrier Jan 30 14:10:43.002126 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:43.014845 systemd-networkd[904]: enP63281s1: Gained carrier Jan 30 14:10:43.038268 systemd-networkd[904]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 30 14:10:43.803142 ignition[899]: Ignition 2.19.0 Jan 30 14:10:43.803158 ignition[899]: Stage: fetch-offline Jan 30 14:10:43.807905 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:10:43.803207 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:43.803216 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:43.803355 ignition[899]: parsed url from cmdline: "" Jan 30 14:10:43.803359 ignition[899]: no config URL provided Jan 30 14:10:43.803363 ignition[899]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:10:43.834576 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 14:10:43.803371 ignition[899]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:10:43.803376 ignition[899]: failed to fetch config: resource requires networking Jan 30 14:10:43.803622 ignition[899]: Ignition finished successfully Jan 30 14:10:43.858527 ignition[912]: Ignition 2.19.0 Jan 30 14:10:43.858533 ignition[912]: Stage: fetch Jan 30 14:10:43.858734 ignition[912]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:43.858743 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:43.858843 ignition[912]: parsed url from cmdline: "" Jan 30 14:10:43.858849 ignition[912]: no config URL provided Jan 30 14:10:43.858854 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:10:43.858861 ignition[912]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:10:43.858894 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 30 14:10:43.964886 ignition[912]: GET result: OK Jan 30 14:10:43.964979 ignition[912]: config has been read from IMDS userdata Jan 30 14:10:43.965021 ignition[912]: parsing config with SHA512: c80e00135ae9849cb29cd85cd417e52d8a112a9d29830e7f9aa5f642fcecbf87cda4da51677b3f173cef68fef178810e927723622f78c54b90c384a69e8e9c4b Jan 30 14:10:43.968855 unknown[912]: fetched base config from "system" Jan 30 14:10:43.969374 ignition[912]: fetch: fetch complete Jan 30 14:10:43.968863 unknown[912]: fetched base config from "system" Jan 30 14:10:43.969379 ignition[912]: fetch: fetch passed Jan 30 14:10:43.968868 unknown[912]: fetched user config from "azure" Jan 30 14:10:43.969430 ignition[912]: Ignition finished successfully Jan 30 14:10:43.974622 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 14:10:43.994564 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 30 14:10:44.021031 ignition[919]: Ignition 2.19.0 Jan 30 14:10:44.021044 ignition[919]: Stage: kargs Jan 30 14:10:44.025695 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:10:44.021255 ignition[919]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:44.021265 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:44.046390 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:10:44.022445 ignition[919]: kargs: kargs passed Jan 30 14:10:44.022504 ignition[919]: Ignition finished successfully Jan 30 14:10:44.064374 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 14:10:44.060920 ignition[925]: Ignition 2.19.0 Jan 30 14:10:44.072752 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 14:10:44.060928 ignition[925]: Stage: disks Jan 30 14:10:44.083267 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:10:44.061119 ignition[925]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:44.096924 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:10:44.061129 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:44.106949 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:10:44.062231 ignition[925]: disks: disks passed Jan 30 14:10:44.120295 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:10:44.062285 ignition[925]: Ignition finished successfully Jan 30 14:10:44.153497 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 30 14:10:44.167329 systemd-networkd[904]: eth0: Gained IPv6LL Jan 30 14:10:44.220386 systemd-networkd[904]: enP63281s1: Gained IPv6LL Jan 30 14:10:44.239077 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 30 14:10:44.248598 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:10:44.265453 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 14:10:44.324238 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 30 14:10:44.324355 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:10:44.329152 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:10:44.379345 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:10:44.390208 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:10:44.399444 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 14:10:44.412841 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:10:44.461668 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (944) Jan 30 14:10:44.461701 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:44.461712 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:44.461724 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:10:44.412875 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:10:44.427893 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 14:10:44.468474 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 30 14:10:44.493243 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:10:44.494246 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 14:10:44.887581 coreos-metadata[946]: Jan 30 14:10:44.887 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 14:10:44.897374 coreos-metadata[946]: Jan 30 14:10:44.897 INFO Fetch successful Jan 30 14:10:44.897374 coreos-metadata[946]: Jan 30 14:10:44.897 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 30 14:10:44.916397 coreos-metadata[946]: Jan 30 14:10:44.912 INFO Fetch successful Jan 30 14:10:44.916397 coreos-metadata[946]: Jan 30 14:10:44.912 INFO wrote hostname ci-4081.3.0-a-1247579205 to /sysroot/etc/hostname Jan 30 14:10:44.916781 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:10:45.187197 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:10:45.233215 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:10:45.239435 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:10:45.247560 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:10:46.062946 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:10:46.078511 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 14:10:46.086436 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:10:46.108357 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:46.102849 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:10:46.135856 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 14:10:46.146625 ignition[1063]: INFO : Ignition 2.19.0 Jan 30 14:10:46.146625 ignition[1063]: INFO : Stage: mount Jan 30 14:10:46.146625 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:46.146625 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:46.146625 ignition[1063]: INFO : mount: mount passed Jan 30 14:10:46.146625 ignition[1063]: INFO : Ignition finished successfully Jan 30 14:10:46.149209 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:10:46.175466 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:10:46.200560 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:10:46.236752 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1074) Jan 30 14:10:46.236814 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:46.243299 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:46.247935 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:10:46.255246 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:10:46.256915 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:10:46.281293 ignition[1092]: INFO : Ignition 2.19.0 Jan 30 14:10:46.281293 ignition[1092]: INFO : Stage: files Jan 30 14:10:46.289512 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:46.289512 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:46.289512 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:10:46.333731 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:10:46.333731 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:10:46.390004 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:10:46.397429 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:10:46.397429 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:10:46.390448 unknown[1092]: wrote ssh authorized keys file for user: core Jan 30 14:10:46.418438 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:10:46.429204 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 30 14:10:46.599103 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 14:10:46.736820 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 14:10:46.747905 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 30 14:10:47.199251 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 14:10:47.398502 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 14:10:47.398502 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:10:47.418903 ignition[1092]: INFO : files: files passed Jan 30 14:10:47.418903 ignition[1092]: INFO : Ignition finished successfully Jan 30 14:10:47.439785 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 14:10:47.476558 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:10:47.495422 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 30 14:10:47.504407 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:10:47.504521 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 14:10:47.550511 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:47.558682 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:47.558682 initrd-setup-root-after-ignition[1120]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:47.551909 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:10:47.565787 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:10:47.602442 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 14:10:47.634711 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:10:47.634864 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:10:47.646824 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:10:47.658495 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:10:47.669491 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:10:47.687550 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:10:47.710923 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:10:47.729548 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:10:47.748125 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:10:47.754759 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 30 14:10:47.766666 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 14:10:47.777480 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 14:10:47.777656 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:10:47.793329 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 14:10:47.804952 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 14:10:47.814903 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 14:10:47.825164 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:10:47.837265 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 14:10:47.849467 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 14:10:47.860759 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:10:47.872910 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 14:10:47.885059 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 14:10:47.896148 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 14:10:47.905711 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 14:10:47.905883 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:10:47.920572 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:10:47.931721 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:10:47.943498 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 14:10:47.943623 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:10:47.956168 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 14:10:47.956364 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:10:47.973944 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 14:10:47.974120 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:10:47.985476 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 14:10:47.985621 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 14:10:47.997024 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 14:10:47.997255 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 14:10:48.029378 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 14:10:48.039847 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 14:10:48.077039 ignition[1144]: INFO : Ignition 2.19.0
Jan 30 14:10:48.077039 ignition[1144]: INFO : Stage: umount
Jan 30 14:10:48.077039 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:48.077039 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:10:48.077039 ignition[1144]: INFO : umount: umount passed
Jan 30 14:10:48.077039 ignition[1144]: INFO : Ignition finished successfully
Jan 30 14:10:48.040025 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:10:48.064438 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 14:10:48.069614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 14:10:48.069760 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:10:48.083776 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 14:10:48.083895 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:10:48.095971 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 14:10:48.098159 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 14:10:48.111288 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 14:10:48.111399 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 14:10:48.121190 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 14:10:48.121250 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 14:10:48.138639 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 14:10:48.138707 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 14:10:48.150044 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 14:10:48.150101 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 14:10:48.161926 systemd[1]: Stopped target network.target - Network.
Jan 30 14:10:48.173178 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 14:10:48.173275 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:10:48.184522 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 14:10:48.194987 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 14:10:48.195044 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:10:48.208594 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 14:10:48.218432 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 14:10:48.229232 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 14:10:48.229293 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:10:48.238986 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 14:10:48.239030 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:10:48.252346 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 14:10:48.252412 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 14:10:48.263508 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 14:10:48.263558 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 14:10:48.275652 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 14:10:48.285986 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 14:10:48.297883 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 14:10:48.302533 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 14:10:48.542634 kernel: hv_netvsc 00224877-20a3-0022-4877-20a300224877 eth0: Data path switched from VF: enP63281s1
Jan 30 14:10:48.302681 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 14:10:48.307856 systemd-networkd[904]: eth0: DHCPv6 lease lost
Jan 30 14:10:48.318326 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 14:10:48.318538 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 14:10:48.338585 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 14:10:48.338647 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:10:48.373727 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 14:10:48.385150 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 14:10:48.385231 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:10:48.402362 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 14:10:48.402418 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:10:48.415622 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 14:10:48.415678 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:10:48.429067 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 14:10:48.429117 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:10:48.439928 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:10:48.483102 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 14:10:48.483405 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:10:48.498145 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 14:10:48.498211 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:10:48.525633 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 14:10:48.525683 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:10:48.537492 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 14:10:48.537549 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:10:48.553899 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 14:10:48.553974 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:10:48.571439 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:10:48.571496 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:48.623472 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 14:10:48.636384 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 14:10:48.636463 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:10:48.651659 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:48.651719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:48.665554 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 14:10:48.665654 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 14:10:48.677017 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 14:10:48.677104 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 14:10:48.751128 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 14:10:48.751290 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 14:10:48.758644 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 14:10:48.770311 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 14:10:48.770385 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 14:10:48.795481 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 14:10:48.886327 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Jan 30 14:10:48.816860 systemd[1]: Switching root.
Jan 30 14:10:48.889876 systemd-journald[216]: Journal stopped
Jan 30 14:10:53.006573 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 14:10:53.006597 kernel: SELinux: policy capability open_perms=1
Jan 30 14:10:53.006607 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 14:10:53.006614 kernel: SELinux: policy capability always_check_network=0
Jan 30 14:10:53.006624 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 14:10:53.006632 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 14:10:53.006641 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 14:10:53.006649 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 14:10:53.006660 kernel: audit: type=1403 audit(1738246249.920:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 14:10:53.006669 systemd[1]: Successfully loaded SELinux policy in 106.566ms.
Jan 30 14:10:53.006681 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.527ms.
Jan 30 14:10:53.006691 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:10:53.006700 systemd[1]: Detected virtualization microsoft.
Jan 30 14:10:53.006709 systemd[1]: Detected architecture arm64.
Jan 30 14:10:53.006718 systemd[1]: Detected first boot.
Jan 30 14:10:53.006729 systemd[1]: Hostname set to .
Jan 30 14:10:53.006739 systemd[1]: Initializing machine ID from random generator.
Jan 30 14:10:53.006748 zram_generator::config[1185]: No configuration found.
Jan 30 14:10:53.006758 systemd[1]: Populated /etc with preset unit settings.
Jan 30 14:10:53.006767 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 14:10:53.006776 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 14:10:53.006785 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 14:10:53.006796 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 14:10:53.006805 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 14:10:53.006815 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 14:10:53.006824 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 14:10:53.006833 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 14:10:53.006843 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 14:10:53.006852 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 14:10:53.006864 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 14:10:53.006873 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:10:53.006882 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:10:53.006892 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 14:10:53.006901 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 14:10:53.006910 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 14:10:53.006920 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:10:53.006929 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 30 14:10:53.006940 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:10:53.006949 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 14:10:53.006959 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 14:10:53.006970 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:10:53.006980 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 14:10:53.006997 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:10:53.007006 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:10:53.007016 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:10:53.007026 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:10:53.007036 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 14:10:53.007045 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 14:10:53.007055 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:10:53.007064 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:10:53.007076 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:10:53.007086 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 14:10:53.007096 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 14:10:53.007106 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 14:10:53.007115 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 14:10:53.007125 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 14:10:53.007135 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 14:10:53.007144 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 14:10:53.007156 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 14:10:53.007166 systemd[1]: Reached target machines.target - Containers.
Jan 30 14:10:53.007176 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 14:10:53.007186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:10:53.007195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:10:53.007205 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 14:10:53.007215 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:10:53.007233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:10:53.007246 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:10:53.007255 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 14:10:53.007265 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:10:53.007275 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 14:10:53.007285 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 14:10:53.007295 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 14:10:53.007305 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 14:10:53.007314 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 14:10:53.007325 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:10:53.007335 kernel: fuse: init (API version 7.39)
Jan 30 14:10:53.007343 kernel: ACPI: bus type drm_connector registered
Jan 30 14:10:53.007352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:10:53.007379 systemd-journald[1288]: Collecting audit messages is disabled.
Jan 30 14:10:53.007403 systemd-journald[1288]: Journal started
Jan 30 14:10:53.007423 systemd-journald[1288]: Runtime Journal (/run/log/journal/0156a76167a04e7f8320327745886f4b) is 8.0M, max 78.5M, 70.5M free.
Jan 30 14:10:51.852517 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 14:10:52.074215 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 30 14:10:52.074606 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 14:10:52.074900 systemd[1]: systemd-journald.service: Consumed 3.134s CPU time.
Jan 30 14:10:53.017263 kernel: loop: module loaded
Jan 30 14:10:53.035409 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 14:10:53.053100 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 14:10:53.066761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:10:53.075506 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 14:10:53.075549 systemd[1]: Stopped verity-setup.service.
Jan 30 14:10:53.092404 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:10:53.093178 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 14:10:53.098887 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 14:10:53.104842 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 14:10:53.110045 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 14:10:53.116517 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 14:10:53.122606 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 14:10:53.128213 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 14:10:53.134788 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:10:53.141671 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 14:10:53.141803 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 14:10:53.148310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:10:53.148441 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:10:53.154723 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:10:53.154865 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:10:53.160849 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:10:53.160970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:10:53.167681 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 14:10:53.167818 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 14:10:53.173902 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:10:53.174043 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:10:53.180283 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:10:53.186591 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 14:10:53.193943 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 14:10:53.200787 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:10:53.218142 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 14:10:53.230340 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 14:10:53.239397 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 14:10:53.245666 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 14:10:53.245705 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:10:53.252046 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 14:10:53.260426 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 14:10:53.267474 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 14:10:53.273367 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:10:53.293408 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 14:10:53.300366 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 14:10:53.306466 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:10:53.307519 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 14:10:53.313345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:10:53.315525 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:10:53.322408 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 14:10:53.334426 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 14:10:53.346472 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 14:10:53.356009 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 14:10:53.365194 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 14:10:53.371801 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 14:10:53.374939 systemd-journald[1288]: Time spent on flushing to /var/log/journal/0156a76167a04e7f8320327745886f4b is 23.968ms for 897 entries.
Jan 30 14:10:53.374939 systemd-journald[1288]: System Journal (/var/log/journal/0156a76167a04e7f8320327745886f4b) is 8.0M, max 2.6G, 2.6G free.
Jan 30 14:10:53.436986 systemd-journald[1288]: Received client request to flush runtime journal.
Jan 30 14:10:53.437026 kernel: loop0: detected capacity change from 0 to 189592
Jan 30 14:10:53.385160 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 14:10:53.400775 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 14:10:53.414511 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 14:10:53.423027 udevadm[1322]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 14:10:53.439254 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 14:10:53.469785 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 14:10:53.476709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:10:53.507106 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 14:10:53.514113 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 14:10:53.514245 kernel: loop1: detected capacity change from 0 to 31320
Jan 30 14:10:53.771738 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 14:10:53.782402 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:10:53.818345 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Jan 30 14:10:53.818360 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Jan 30 14:10:53.825289 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:10:53.911268 kernel: loop2: detected capacity change from 0 to 114432
Jan 30 14:10:54.226253 kernel: loop3: detected capacity change from 0 to 114328
Jan 30 14:10:54.496265 kernel: loop4: detected capacity change from 0 to 189592
Jan 30 14:10:54.508249 kernel: loop5: detected capacity change from 0 to 31320
Jan 30 14:10:54.519249 kernel: loop6: detected capacity change from 0 to 114432
Jan 30 14:10:54.530360 kernel: loop7: detected capacity change from 0 to 114328
Jan 30 14:10:54.542371 (sd-merge)[1343]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 30 14:10:54.542806 (sd-merge)[1343]: Merged extensions into '/usr'.
Jan 30 14:10:54.548893 systemd[1]: Reloading requested from client PID 1319 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 14:10:54.549164 systemd[1]: Reloading...
Jan 30 14:10:54.618243 zram_generator::config[1369]: No configuration found.
Jan 30 14:10:54.754570 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:10:54.811040 systemd[1]: Reloading finished in 261 ms.
Jan 30 14:10:54.840948 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 14:10:54.847943 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 14:10:54.865471 systemd[1]: Starting ensure-sysext.service...
Jan 30 14:10:54.870310 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:10:54.878418 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:10:54.894920 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 14:10:54.895196 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 14:10:54.895870 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 14:10:54.896093 systemd-tmpfiles[1426]: ACLs are not supported, ignoring.
Jan 30 14:10:54.896147 systemd-tmpfiles[1426]: ACLs are not supported, ignoring.
Jan 30 14:10:54.898916 systemd-tmpfiles[1426]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 14:10:54.898928 systemd-tmpfiles[1426]: Skipping /boot
Jan 30 14:10:54.905760 systemd-tmpfiles[1426]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 14:10:54.905776 systemd-tmpfiles[1426]: Skipping /boot
Jan 30 14:10:54.910805 systemd-udevd[1427]: Using default interface naming scheme 'v255'.
Jan 30 14:10:54.925204 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:10:54.939469 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 14:10:54.948495 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 14:10:54.957520 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 14:10:54.970575 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:10:54.979645 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 14:10:54.988020 systemd[1]: Reloading requested from client PID 1425 ('systemctl') (unit ensure-sysext.service)...
Jan 30 14:10:54.988139 systemd[1]: Reloading...
Jan 30 14:10:55.077253 zram_generator::config[1480]: No configuration found.
Jan 30 14:10:55.186242 augenrules[1553]: No rules
Jan 30 14:10:55.232902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:10:55.309122 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 30 14:10:55.309341 systemd[1]: Reloading finished in 320 ms.
Jan 30 14:10:55.325005 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:10:55.351242 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 14:10:55.351354 kernel: hv_vmbus: registering driver hv_balloon
Jan 30 14:10:55.352976 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 14:10:55.358243 kernel: hv_vmbus: registering driver hyperv_fb
Jan 30 14:10:55.358303 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 30 14:10:55.358325 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 30 14:10:55.382730 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 30 14:10:55.382835 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 30 14:10:55.389775 kernel: Console: switching to colour dummy device 80x25
Jan 30 14:10:55.397277 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 14:10:55.393429 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 14:10:55.436407 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 14:10:55.467183 systemd[1]: Finished ensure-sysext.service.
Jan 30 14:10:55.477250 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Jan 30 14:10:55.478715 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:10:55.489495 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1526)
Jan 30 14:10:55.494974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:10:55.504421 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:10:55.518191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:10:55.532055 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:10:55.542797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:10:55.549483 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:10:55.555181 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 14:10:55.562310 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 14:10:55.571497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:55.578007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:10:55.578167 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:10:55.586881 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:10:55.587038 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:10:55.593197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:10:55.593353 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:10:55.600128 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:10:55.600283 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:10:55.634169 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 14:10:55.643106 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 14:10:55.660545 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 14:10:55.670617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:10:55.670682 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:10:55.673277 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 14:10:55.690075 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 14:10:55.698063 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 14:10:55.705717 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:55.707292 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:55.723372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:55.729652 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 14:10:55.736268 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 14:10:55.769598 systemd-resolved[1440]: Positive Trust Anchors:
Jan 30 14:10:55.769620 systemd-resolved[1440]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:10:55.769651 systemd-resolved[1440]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:10:55.778815 lvm[1639]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 14:10:55.786295 systemd-resolved[1440]: Using system hostname 'ci-4081.3.0-a-1247579205'.
Jan 30 14:10:55.788063 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:10:55.796393 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:10:55.808002 systemd-networkd[1621]: lo: Link UP
Jan 30 14:10:55.808012 systemd-networkd[1621]: lo: Gained carrier
Jan 30 14:10:55.809939 systemd-networkd[1621]: Enumeration completed
Jan 30 14:10:55.810073 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:10:55.810757 systemd-networkd[1621]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:55.810763 systemd-networkd[1621]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:10:55.816763 systemd[1]: Reached target network.target - Network.
Jan 30 14:10:55.827419 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 14:10:55.834877 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 14:10:55.843470 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:10:55.856340 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 14:10:55.867131 lvm[1650]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 14:10:55.894247 kernel: mlx5_core f731:00:02.0 enP63281s1: Link up
Jan 30 14:10:55.896884 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 14:10:55.921195 kernel: hv_netvsc 00224877-20a3-0022-4877-20a300224877 eth0: Data path switched to VF: enP63281s1
Jan 30 14:10:55.922322 systemd-networkd[1621]: enP63281s1: Link UP
Jan 30 14:10:55.922564 systemd-networkd[1621]: eth0: Link UP
Jan 30 14:10:55.922576 systemd-networkd[1621]: eth0: Gained carrier
Jan 30 14:10:55.922592 systemd-networkd[1621]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:55.929591 systemd-networkd[1621]: enP63281s1: Gained carrier
Jan 30 14:10:55.930786 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:55.940317 systemd-networkd[1621]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 30 14:10:57.532439 systemd-networkd[1621]: enP63281s1: Gained IPv6LL
Jan 30 14:10:57.532738 systemd-networkd[1621]: eth0: Gained IPv6LL
Jan 30 14:10:57.535921 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 14:10:57.543521 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 14:10:58.413792 ldconfig[1314]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 14:10:58.446000 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 14:10:58.457398 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 14:10:58.470939 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 14:10:58.478012 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:10:58.483803 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 14:10:58.490707 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 14:10:58.497403 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 14:10:58.503083 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 14:10:58.509678 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 14:10:58.516188 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 14:10:58.516218 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:10:58.521053 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:10:58.526750 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 14:10:58.533913 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 14:10:58.545984 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 14:10:58.552112 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 14:10:58.557751 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:10:58.562720 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:10:58.567599 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 14:10:58.567624 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 14:10:58.573367 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 30 14:10:58.581375 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 14:10:58.589399 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 14:10:58.603787 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 14:10:58.612205 (chronyd)[1661]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 30 14:10:58.612382 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 14:10:58.621342 jq[1667]: false
Jan 30 14:10:58.624471 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 14:10:58.630547 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 14:10:58.630595 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 30 14:10:58.631802 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 30 14:10:58.637847 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 30 14:10:58.640712 KVP[1669]: KVP starting; pid is:1669
Jan 30 14:10:58.642172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:10:58.644652 chronyd[1672]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 30 14:10:58.657694 KVP[1669]: KVP LIC Version: 3.1
Jan 30 14:10:58.659326 kernel: hv_utils: KVP IC version 4.0
Jan 30 14:10:58.661900 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 14:10:58.661689 chronyd[1672]: Timezone right/UTC failed leap second check, ignoring
Jan 30 14:10:58.661923 chronyd[1672]: Loaded seccomp filter (level 2)
Jan 30 14:10:58.674419 dbus-daemon[1664]: [system] SELinux support is enabled
Jan 30 14:10:58.678288 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 14:10:58.692911 extend-filesystems[1668]: Found loop4
Jan 30 14:10:58.703398 extend-filesystems[1668]: Found loop5
Jan 30 14:10:58.703398 extend-filesystems[1668]: Found loop6
Jan 30 14:10:58.703398 extend-filesystems[1668]: Found loop7
Jan 30 14:10:58.703398 extend-filesystems[1668]: Found sda
Jan 30 14:10:58.703398 extend-filesystems[1668]: Found sda1
Jan 30 14:10:58.703398 extend-filesystems[1668]: Found sda2
Jan 30 14:10:58.703398 extend-filesystems[1668]: Found sda3
Jan 30 14:10:58.703398 extend-filesystems[1668]: Found usr
Jan 30 14:10:58.703398 extend-filesystems[1668]: Found sda4
Jan 30 14:10:58.703398 extend-filesystems[1668]: Found sda6
Jan 30 14:10:58.703398 extend-filesystems[1668]: Found sda7
Jan 30 14:10:58.703398 extend-filesystems[1668]: Found sda9
Jan 30 14:10:58.703398 extend-filesystems[1668]: Checking size of /dev/sda9
Jan 30 14:10:58.695382 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 14:10:58.861110 extend-filesystems[1668]: Old size kept for /dev/sda9
Jan 30 14:10:58.861110 extend-filesystems[1668]: Found sr0
Jan 30 14:10:58.884054 coreos-metadata[1663]: Jan 30 14:10:58.749 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 14:10:58.884054 coreos-metadata[1663]: Jan 30 14:10:58.767 INFO Fetch successful
Jan 30 14:10:58.884054 coreos-metadata[1663]: Jan 30 14:10:58.767 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 30 14:10:58.884054 coreos-metadata[1663]: Jan 30 14:10:58.776 INFO Fetch successful
Jan 30 14:10:58.884054 coreos-metadata[1663]: Jan 30 14:10:58.777 INFO Fetching http://168.63.129.16/machine/5b0b130f-6a6c-4cad-8d73-b1f02dc327aa/8aef268c%2Dcfd9%2D4239%2Db66a%2Dd96b7e9e9be7.%5Fci%2D4081.3.0%2Da%2D1247579205?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 30 14:10:58.884054 coreos-metadata[1663]: Jan 30 14:10:58.779 INFO Fetch successful
Jan 30 14:10:58.884054 coreos-metadata[1663]: Jan 30 14:10:58.779 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 30 14:10:58.884054 coreos-metadata[1663]: Jan 30 14:10:58.793 INFO Fetch successful
Jan 30 14:10:58.714063 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 14:10:58.744639 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 14:10:58.769541 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 14:10:58.787439 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 14:10:58.787941 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 14:10:58.898164 update_engine[1698]: I20250130 14:10:58.870509 1698 main.cc:92] Flatcar Update Engine starting
Jan 30 14:10:58.898164 update_engine[1698]: I20250130 14:10:58.879413 1698 update_check_scheduler.cc:74] Next update check in 11m50s
Jan 30 14:10:58.792614 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 14:10:58.898839 jq[1704]: true
Jan 30 14:10:58.832607 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 14:10:58.844192 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 14:10:58.855907 systemd[1]: Started chronyd.service - NTP client/server.
Jan 30 14:10:58.889781 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 14:10:58.889933 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 14:10:58.890191 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 14:10:58.890413 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 14:10:58.904789 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 14:10:58.905015 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 14:10:58.922428 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 14:10:58.929930 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 14:10:58.930135 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 14:10:58.962044 (ntainerd)[1721]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 14:10:58.969114 systemd-logind[1694]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 30 14:10:58.974345 systemd-logind[1694]: New seat seat0.
Jan 30 14:10:58.979351 jq[1719]: true
Jan 30 14:10:58.977114 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 14:10:58.992114 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 14:10:59.011268 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 14:10:59.040530 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1709)
Jan 30 14:10:59.058908 tar[1718]: linux-arm64/helm
Jan 30 14:10:59.068694 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 14:10:59.069704 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 14:10:59.069828 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 14:10:59.086771 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 14:10:59.086888 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 14:10:59.107455 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 14:10:59.265734 bash[1786]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 14:10:59.269130 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 14:10:59.278090 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 30 14:10:59.470693 locksmithd[1778]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 14:10:59.598003 tar[1718]: linux-arm64/LICENSE
Jan 30 14:10:59.598003 tar[1718]: linux-arm64/README.md
Jan 30 14:10:59.724464 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 14:10:59.992746 containerd[1721]: time="2025-01-30T14:10:59.992650340Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 14:11:00.056264 containerd[1721]: time="2025-01-30T14:11:00.055695940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:11:00.057259 containerd[1721]: time="2025-01-30T14:11:00.057206980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:11:00.057290 containerd[1721]: time="2025-01-30T14:11:00.057258060Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 14:11:00.057290 containerd[1721]: time="2025-01-30T14:11:00.057278100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 14:11:00.057467 containerd[1721]: time="2025-01-30T14:11:00.057443140Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 14:11:00.057503 containerd[1721]: time="2025-01-30T14:11:00.057469380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 14:11:00.057553 containerd[1721]: time="2025-01-30T14:11:00.057531780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:11:00.057585 containerd[1721]: time="2025-01-30T14:11:00.057550620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:11:00.057747 containerd[1721]: time="2025-01-30T14:11:00.057722060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:11:00.057747 containerd[1721]: time="2025-01-30T14:11:00.057744100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 14:11:00.057792 containerd[1721]: time="2025-01-30T14:11:00.057758180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:11:00.057792 containerd[1721]: time="2025-01-30T14:11:00.057768460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 14:11:00.057861 containerd[1721]: time="2025-01-30T14:11:00.057839940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:11:00.058068 containerd[1721]: time="2025-01-30T14:11:00.058045100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:11:00.059481 containerd[1721]: time="2025-01-30T14:11:00.058157020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:11:00.059481 containerd[1721]: time="2025-01-30T14:11:00.059087420Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 14:11:00.059481 containerd[1721]: time="2025-01-30T14:11:00.059200540Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 14:11:00.059481 containerd[1721]: time="2025-01-30T14:11:00.059277740Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 14:11:00.113739 containerd[1721]: time="2025-01-30T14:11:00.113692860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 14:11:00.115088 containerd[1721]: time="2025-01-30T14:11:00.114330940Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 14:11:00.115088 containerd[1721]: time="2025-01-30T14:11:00.114366740Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 14:11:00.115088 containerd[1721]: time="2025-01-30T14:11:00.114384660Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 14:11:00.115088 containerd[1721]: time="2025-01-30T14:11:00.114401380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 14:11:00.115088 containerd[1721]: time="2025-01-30T14:11:00.114619820Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 14:11:00.116323 containerd[1721]: time="2025-01-30T14:11:00.116287940Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 14:11:00.116581 containerd[1721]: time="2025-01-30T14:11:00.116559500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 14:11:00.116758 containerd[1721]: time="2025-01-30T14:11:00.116733380Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 14:11:00.116875 containerd[1721]: time="2025-01-30T14:11:00.116840900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 14:11:00.117421 containerd[1721]: time="2025-01-30T14:11:00.117276100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 14:11:00.117421 containerd[1721]: time="2025-01-30T14:11:00.117310260Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 14:11:00.117421 containerd[1721]: time="2025-01-30T14:11:00.117344500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 14:11:00.117421 containerd[1721]: time="2025-01-30T14:11:00.117376140Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 14:11:00.118437 containerd[1721]: time="2025-01-30T14:11:00.118343100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 14:11:00.118437 containerd[1721]: time="2025-01-30T14:11:00.118395580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 14:11:00.119318 containerd[1721]: time="2025-01-30T14:11:00.118414540Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 14:11:00.119374 containerd[1721]: time="2025-01-30T14:11:00.119324940Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 14:11:00.119374 containerd[1721]: time="2025-01-30T14:11:00.119357060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119422 containerd[1721]: time="2025-01-30T14:11:00.119378260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119422 containerd[1721]: time="2025-01-30T14:11:00.119396260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119422 containerd[1721]: time="2025-01-30T14:11:00.119415380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119495 containerd[1721]: time="2025-01-30T14:11:00.119429820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119495 containerd[1721]: time="2025-01-30T14:11:00.119449100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119495 containerd[1721]: time="2025-01-30T14:11:00.119464940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119495 containerd[1721]: time="2025-01-30T14:11:00.119483260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119561 containerd[1721]: time="2025-01-30T14:11:00.119500460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119561 containerd[1721]: time="2025-01-30T14:11:00.119521940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119561 containerd[1721]: time="2025-01-30T14:11:00.119538540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119561 containerd[1721]: time="2025-01-30T14:11:00.119551620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119631 containerd[1721]: time="2025-01-30T14:11:00.119568420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119631 containerd[1721]: time="2025-01-30T14:11:00.119590540Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 14:11:00.119631 containerd[1721]: time="2025-01-30T14:11:00.119620020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119680 containerd[1721]: time="2025-01-30T14:11:00.119636620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119680 containerd[1721]: time="2025-01-30T14:11:00.119652340Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 14:11:00.119840 containerd[1721]: time="2025-01-30T14:11:00.119713860Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 14:11:00.119840 containerd[1721]: time="2025-01-30T14:11:00.119744060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 14:11:00.119840 containerd[1721]: time="2025-01-30T14:11:00.119761180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 14:11:00.119840 containerd[1721]: time="2025-01-30T14:11:00.119776940Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 14:11:00.119840 containerd[1721]: time="2025-01-30T14:11:00.119787260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.119840 containerd[1721]: time="2025-01-30T14:11:00.119808980Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 14:11:00.120672 containerd[1721]: time="2025-01-30T14:11:00.119824300Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 14:11:00.122481 containerd[1721]: time="2025-01-30T14:11:00.122418900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 14:11:00.124855 containerd[1721]: time="2025-01-30T14:11:00.122915500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 14:11:00.124855 containerd[1721]: time="2025-01-30T14:11:00.122984380Z" level=info msg="Connect containerd service"
Jan 30 14:11:00.124855 containerd[1721]: time="2025-01-30T14:11:00.123021940Z" level=info msg="using legacy CRI server"
Jan 30 14:11:00.124855 containerd[1721]: time="2025-01-30T14:11:00.123033580Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 14:11:00.124855 containerd[1721]: time="2025-01-30T14:11:00.123122380Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 14:11:00.127610 containerd[1721]: time="2025-01-30T14:11:00.127564220Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 14:11:00.128105 containerd[1721]: time="2025-01-30T14:11:00.128007940Z" level=info msg="Start subscribing containerd event"
Jan 30 14:11:00.128105 containerd[1721]: time="2025-01-30T14:11:00.128076020Z" level=info msg="Start recovering state"
Jan 30 14:11:00.128178 containerd[1721]: time="2025-01-30T14:11:00.128149740Z" level=info msg="Start event monitor"
Jan 30 14:11:00.128178 containerd[1721]: time="2025-01-30T14:11:00.128161860Z" level=info msg="Start snapshots syncer"
Jan 30 14:11:00.128178 containerd[1721]: time="2025-01-30T14:11:00.128171940Z" level=info msg="Start cni network conf syncer for default"
Jan 30 14:11:00.128178 containerd[1721]: time="2025-01-30T14:11:00.128179020Z" level=info msg="Start streaming server"
Jan 30 14:11:00.130290 containerd[1721]: time="2025-01-30T14:11:00.128040780Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 14:11:00.130290 containerd[1721]: time="2025-01-30T14:11:00.128363460Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 14:11:00.130290 containerd[1721]: time="2025-01-30T14:11:00.128409540Z" level=info msg="containerd successfully booted in 0.140603s"
Jan 30 14:11:00.132386 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 14:11:00.156460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:11:00.171791 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:11:00.572988 kubelet[1808]: E0130 14:11:00.572939 1808 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:11:00.575928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:11:00.576062 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:11:00.852871 sshd_keygen[1693]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 14:11:00.871619 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 14:11:00.882468 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 14:11:00.889450 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 30 14:11:00.898920 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:11:00.899109 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:11:00.912527 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:11:00.919437 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 30 14:11:00.958599 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:11:00.972589 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:11:00.979046 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 14:11:00.985653 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 14:11:00.990724 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:11:00.997787 systemd[1]: Startup finished in 684ms (kernel) + 11.725s (initrd) + 11.182s (userspace) = 23.592s. Jan 30 14:11:01.280657 login[1839]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 30 14:11:01.281092 login[1838]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:01.289364 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:11:01.294468 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:11:01.297302 systemd-logind[1694]: New session 2 of user core. Jan 30 14:11:01.306945 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:11:01.317550 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:11:01.320907 (systemd)[1846]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:11:01.452174 systemd[1846]: Queued start job for default target default.target. 
Jan 30 14:11:01.459185 systemd[1846]: Created slice app.slice - User Application Slice. Jan 30 14:11:01.459376 systemd[1846]: Reached target paths.target - Paths. Jan 30 14:11:01.459443 systemd[1846]: Reached target timers.target - Timers. Jan 30 14:11:01.461373 systemd[1846]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:11:01.472047 systemd[1846]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:11:01.472123 systemd[1846]: Reached target sockets.target - Sockets. Jan 30 14:11:01.472136 systemd[1846]: Reached target basic.target - Basic System. Jan 30 14:11:01.472184 systemd[1846]: Reached target default.target - Main User Target. Jan 30 14:11:01.472213 systemd[1846]: Startup finished in 145ms. Jan 30 14:11:01.472365 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:11:01.473922 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:11:02.282244 login[1839]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:02.287301 systemd-logind[1694]: New session 1 of user core. Jan 30 14:11:02.292370 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 30 14:11:02.441931 waagent[1835]: 2025-01-30T14:11:02.441835Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 30 14:11:02.447702 waagent[1835]: 2025-01-30T14:11:02.447623Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 30 14:11:02.452793 waagent[1835]: 2025-01-30T14:11:02.452735Z INFO Daemon Daemon Python: 3.11.9 Jan 30 14:11:02.457302 waagent[1835]: 2025-01-30T14:11:02.457239Z INFO Daemon Daemon Run daemon Jan 30 14:11:02.461248 waagent[1835]: 2025-01-30T14:11:02.461184Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 30 14:11:02.471159 waagent[1835]: 2025-01-30T14:11:02.471084Z INFO Daemon Daemon Using waagent for provisioning Jan 30 14:11:02.476325 waagent[1835]: 2025-01-30T14:11:02.476263Z INFO Daemon Daemon Activate resource disk Jan 30 14:11:02.480655 waagent[1835]: 2025-01-30T14:11:02.480604Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 30 14:11:02.491995 waagent[1835]: 2025-01-30T14:11:02.491936Z INFO Daemon Daemon Found device: None Jan 30 14:11:02.496440 waagent[1835]: 2025-01-30T14:11:02.496391Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 30 14:11:02.504511 waagent[1835]: 2025-01-30T14:11:02.504458Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 30 14:11:02.516830 waagent[1835]: 2025-01-30T14:11:02.516772Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 14:11:02.522299 waagent[1835]: 2025-01-30T14:11:02.522254Z INFO Daemon Daemon Running default provisioning handler Jan 30 14:11:02.533620 waagent[1835]: 2025-01-30T14:11:02.533497Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jan 30 14:11:02.546495 waagent[1835]: 2025-01-30T14:11:02.546428Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 30 14:11:02.555492 waagent[1835]: 2025-01-30T14:11:02.555436Z INFO Daemon Daemon cloud-init is enabled: False Jan 30 14:11:02.560148 waagent[1835]: 2025-01-30T14:11:02.560104Z INFO Daemon Daemon Copying ovf-env.xml Jan 30 14:11:02.693249 waagent[1835]: 2025-01-30T14:11:02.690216Z INFO Daemon Daemon Successfully mounted dvd Jan 30 14:11:02.705553 waagent[1835]: 2025-01-30T14:11:02.705467Z INFO Daemon Daemon Detect protocol endpoint Jan 30 14:11:02.705786 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 30 14:11:02.710360 waagent[1835]: 2025-01-30T14:11:02.710192Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 14:11:02.715708 waagent[1835]: 2025-01-30T14:11:02.715659Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 30 14:11:02.722348 waagent[1835]: 2025-01-30T14:11:02.722295Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 30 14:11:02.727383 waagent[1835]: 2025-01-30T14:11:02.727335Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 30 14:11:02.732159 waagent[1835]: 2025-01-30T14:11:02.732114Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 30 14:11:02.771708 waagent[1835]: 2025-01-30T14:11:02.771662Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 30 14:11:02.777986 waagent[1835]: 2025-01-30T14:11:02.777957Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 30 14:11:02.783175 waagent[1835]: 2025-01-30T14:11:02.783123Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 30 14:11:02.974267 waagent[1835]: 2025-01-30T14:11:02.973916Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 30 14:11:02.980328 waagent[1835]: 2025-01-30T14:11:02.980229Z INFO Daemon Daemon Forcing an update of the goal state. 
Jan 30 14:11:02.989443 waagent[1835]: 2025-01-30T14:11:02.989387Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 14:11:03.011389 waagent[1835]: 2025-01-30T14:11:03.011341Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 30 14:11:03.017468 waagent[1835]: 2025-01-30T14:11:03.017423Z INFO Daemon Jan 30 14:11:03.020433 waagent[1835]: 2025-01-30T14:11:03.020390Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 73aa6324-1907-4cbc-9a90-47776b7fe637 eTag: 11990766963754194352 source: Fabric] Jan 30 14:11:03.250786 waagent[1835]: 2025-01-30T14:11:03.250675Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 30 14:11:03.259386 waagent[1835]: 2025-01-30T14:11:03.259334Z INFO Daemon Jan 30 14:11:03.262435 waagent[1835]: 2025-01-30T14:11:03.262390Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 30 14:11:03.273478 waagent[1835]: 2025-01-30T14:11:03.273440Z INFO Daemon Daemon Downloading artifacts profile blob Jan 30 14:11:03.357262 waagent[1835]: 2025-01-30T14:11:03.356895Z INFO Daemon Downloaded certificate {'thumbprint': 'E69A9FCE99F0BBCE571ECA706F3B902BFD546285', 'hasPrivateKey': True} Jan 30 14:11:03.367059 waagent[1835]: 2025-01-30T14:11:03.367008Z INFO Daemon Downloaded certificate {'thumbprint': 'BFE73FF83E7B200B92E6C9D1AD763FE4B2AC0FF0', 'hasPrivateKey': False} Jan 30 14:11:03.376372 waagent[1835]: 2025-01-30T14:11:03.376323Z INFO Daemon Fetch goal state completed Jan 30 14:11:03.387697 waagent[1835]: 2025-01-30T14:11:03.387653Z INFO Daemon Daemon Starting provisioning Jan 30 14:11:03.392676 waagent[1835]: 2025-01-30T14:11:03.392617Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 30 14:11:03.398050 waagent[1835]: 2025-01-30T14:11:03.397996Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-1247579205] Jan 30 14:11:03.422241 waagent[1835]: 2025-01-30T14:11:03.420022Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-1247579205] Jan 30 14:11:03.426972 waagent[1835]: 2025-01-30T14:11:03.426907Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 30 14:11:03.433082 waagent[1835]: 2025-01-30T14:11:03.433027Z INFO Daemon Daemon Primary interface is [eth0] Jan 30 14:11:03.461158 systemd-networkd[1621]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:11:03.461347 systemd-networkd[1621]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:11:03.461375 systemd-networkd[1621]: eth0: DHCP lease lost Jan 30 14:11:03.462312 waagent[1835]: 2025-01-30T14:11:03.462180Z INFO Daemon Daemon Create user account if not exists Jan 30 14:11:03.467544 waagent[1835]: 2025-01-30T14:11:03.467484Z INFO Daemon Daemon User core already exists, skip useradd Jan 30 14:11:03.473365 waagent[1835]: 2025-01-30T14:11:03.473312Z INFO Daemon Daemon Configure sudoer Jan 30 14:11:03.477978 waagent[1835]: 2025-01-30T14:11:03.477920Z INFO Daemon Daemon Configure sshd Jan 30 14:11:03.482232 waagent[1835]: 2025-01-30T14:11:03.482176Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 30 14:11:03.494081 systemd-networkd[1621]: eth0: DHCPv6 lease lost Jan 30 14:11:03.494325 waagent[1835]: 2025-01-30T14:11:03.494258Z INFO Daemon Daemon Deploy ssh public key. 
Jan 30 14:11:03.512283 systemd-networkd[1621]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 30 14:11:04.617600 waagent[1835]: 2025-01-30T14:11:04.617545Z INFO Daemon Daemon Provisioning complete Jan 30 14:11:04.636414 waagent[1835]: 2025-01-30T14:11:04.636362Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 30 14:11:04.642624 waagent[1835]: 2025-01-30T14:11:04.642569Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 30 14:11:04.651926 waagent[1835]: 2025-01-30T14:11:04.651874Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 30 14:11:04.784269 waagent[1901]: 2025-01-30T14:11:04.784088Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 30 14:11:04.784596 waagent[1901]: 2025-01-30T14:11:04.784282Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 30 14:11:04.784596 waagent[1901]: 2025-01-30T14:11:04.784351Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 30 14:11:04.823630 waagent[1901]: 2025-01-30T14:11:04.823543Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 30 14:11:04.823827 waagent[1901]: 2025-01-30T14:11:04.823784Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 14:11:04.823890 waagent[1901]: 2025-01-30T14:11:04.823859Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 14:11:04.832388 waagent[1901]: 2025-01-30T14:11:04.832316Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 14:11:04.838381 waagent[1901]: 2025-01-30T14:11:04.838336Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 30 14:11:04.838922 waagent[1901]: 2025-01-30T14:11:04.838878Z INFO ExtHandler Jan 30 14:11:04.838994 waagent[1901]: 
2025-01-30T14:11:04.838965Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7bd27f41-acff-4673-8d51-4d6055e176b6 eTag: 11990766963754194352 source: Fabric] Jan 30 14:11:04.839333 waagent[1901]: 2025-01-30T14:11:04.839288Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 30 14:11:04.839931 waagent[1901]: 2025-01-30T14:11:04.839884Z INFO ExtHandler Jan 30 14:11:04.839994 waagent[1901]: 2025-01-30T14:11:04.839967Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 30 14:11:04.843975 waagent[1901]: 2025-01-30T14:11:04.843942Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 30 14:11:04.929845 waagent[1901]: 2025-01-30T14:11:04.929742Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E69A9FCE99F0BBCE571ECA706F3B902BFD546285', 'hasPrivateKey': True} Jan 30 14:11:04.930309 waagent[1901]: 2025-01-30T14:11:04.930253Z INFO ExtHandler Downloaded certificate {'thumbprint': 'BFE73FF83E7B200B92E6C9D1AD763FE4B2AC0FF0', 'hasPrivateKey': False} Jan 30 14:11:04.930760 waagent[1901]: 2025-01-30T14:11:04.930712Z INFO ExtHandler Fetch goal state completed Jan 30 14:11:04.947323 waagent[1901]: 2025-01-30T14:11:04.947264Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1901 Jan 30 14:11:04.947479 waagent[1901]: 2025-01-30T14:11:04.947444Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 30 14:11:04.949098 waagent[1901]: 2025-01-30T14:11:04.949052Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 30 14:11:04.949499 waagent[1901]: 2025-01-30T14:11:04.949460Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 30 14:11:05.008336 waagent[1901]: 2025-01-30T14:11:05.008288Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 30 
14:11:05.008539 waagent[1901]: 2025-01-30T14:11:05.008499Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 30 14:11:05.014947 waagent[1901]: 2025-01-30T14:11:05.014449Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 30 14:11:05.021175 systemd[1]: Reloading requested from client PID 1916 ('systemctl') (unit waagent.service)... Jan 30 14:11:05.021190 systemd[1]: Reloading... Jan 30 14:11:05.098372 zram_generator::config[1950]: No configuration found. Jan 30 14:11:05.203106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:11:05.280862 systemd[1]: Reloading finished in 259 ms. Jan 30 14:11:05.305910 waagent[1901]: 2025-01-30T14:11:05.302578Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 30 14:11:05.309285 systemd[1]: Reloading requested from client PID 2004 ('systemctl') (unit waagent.service)... Jan 30 14:11:05.309298 systemd[1]: Reloading... Jan 30 14:11:05.389257 zram_generator::config[2038]: No configuration found. Jan 30 14:11:05.503902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:11:05.582191 systemd[1]: Reloading finished in 272 ms. 
Jan 30 14:11:05.610459 waagent[1901]: 2025-01-30T14:11:05.608565Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 30 14:11:05.610459 waagent[1901]: 2025-01-30T14:11:05.608731Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 30 14:11:06.380277 waagent[1901]: 2025-01-30T14:11:06.380056Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 30 14:11:06.380765 waagent[1901]: 2025-01-30T14:11:06.380706Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 30 14:11:06.381625 waagent[1901]: 2025-01-30T14:11:06.381508Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 30 14:11:06.382081 waagent[1901]: 2025-01-30T14:11:06.382005Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 30 14:11:06.382368 waagent[1901]: 2025-01-30T14:11:06.382256Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 14:11:06.382624 waagent[1901]: 2025-01-30T14:11:06.382556Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 30 14:11:06.382799 waagent[1901]: 2025-01-30T14:11:06.382729Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 30 14:11:06.383134 waagent[1901]: 2025-01-30T14:11:06.383061Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 30 14:11:06.383390 waagent[1901]: 2025-01-30T14:11:06.383299Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 30 14:11:06.384093 waagent[1901]: 2025-01-30T14:11:06.384004Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 14:11:06.384937 waagent[1901]: 2025-01-30T14:11:06.384453Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 30 14:11:06.384937 waagent[1901]: 2025-01-30T14:11:06.384664Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 30 14:11:06.384937 waagent[1901]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 30 14:11:06.384937 waagent[1901]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 30 14:11:06.384937 waagent[1901]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 30 14:11:06.384937 waagent[1901]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 30 14:11:06.384937 waagent[1901]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 14:11:06.384937 waagent[1901]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 14:11:06.385500 waagent[1901]: 2025-01-30T14:11:06.385443Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 30 14:11:06.385640 waagent[1901]: 2025-01-30T14:11:06.385587Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 14:11:06.385784 waagent[1901]: 2025-01-30T14:11:06.385749Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 14:11:06.388983 waagent[1901]: 2025-01-30T14:11:06.388930Z INFO EnvHandler ExtHandler Configure routes Jan 30 14:11:06.389675 waagent[1901]: 2025-01-30T14:11:06.389626Z INFO ExtHandler ExtHandler Jan 30 14:11:06.390250 waagent[1901]: 2025-01-30T14:11:06.390124Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 45a3e7a3-7168-431e-8eb1-355b9351d4f3 correlation fa6e91dd-3ae0-4e48-8e83-2b9b50eb973f created: 2025-01-30T14:09:54.501183Z] Jan 30 14:11:06.391395 waagent[1901]: 2025-01-30T14:11:06.391289Z INFO 
EnvHandler ExtHandler Gateway:None Jan 30 14:11:06.391815 waagent[1901]: 2025-01-30T14:11:06.391756Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 30 14:11:06.391986 waagent[1901]: 2025-01-30T14:11:06.391873Z INFO EnvHandler ExtHandler Routes:None Jan 30 14:11:06.394569 waagent[1901]: 2025-01-30T14:11:06.394519Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Jan 30 14:11:06.436486 waagent[1901]: 2025-01-30T14:11:06.436416Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CEFDFFD4-E6B1-475D-BE81-A51BA0BB2549;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 30 14:11:06.495042 waagent[1901]: 2025-01-30T14:11:06.494602Z INFO MonitorHandler ExtHandler Network interfaces: Jan 30 14:11:06.495042 waagent[1901]: Executing ['ip', '-a', '-o', 'link']: Jan 30 14:11:06.495042 waagent[1901]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 30 14:11:06.495042 waagent[1901]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:77:20:a3 brd ff:ff:ff:ff:ff:ff Jan 30 14:11:06.495042 waagent[1901]: 3: enP63281s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:77:20:a3 brd ff:ff:ff:ff:ff:ff\ altname enP63281p0s2 Jan 30 14:11:06.495042 waagent[1901]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 30 14:11:06.495042 waagent[1901]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 30 14:11:06.495042 waagent[1901]: 2: eth0 inet 10.200.20.19/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 30 14:11:06.495042 waagent[1901]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 30 14:11:06.495042 waagent[1901]: 1: lo inet6 ::1/128 scope host noprefixroute \ 
valid_lft forever preferred_lft forever Jan 30 14:11:06.495042 waagent[1901]: 2: eth0 inet6 fe80::222:48ff:fe77:20a3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 14:11:06.495042 waagent[1901]: 3: enP63281s1 inet6 fe80::222:48ff:fe77:20a3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 14:11:06.547241 waagent[1901]: 2025-01-30T14:11:06.547150Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 30 14:11:06.547241 waagent[1901]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 14:11:06.547241 waagent[1901]: pkts bytes target prot opt in out source destination Jan 30 14:11:06.547241 waagent[1901]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 14:11:06.547241 waagent[1901]: pkts bytes target prot opt in out source destination Jan 30 14:11:06.547241 waagent[1901]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 14:11:06.547241 waagent[1901]: pkts bytes target prot opt in out source destination Jan 30 14:11:06.547241 waagent[1901]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 14:11:06.547241 waagent[1901]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 14:11:06.547241 waagent[1901]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 14:11:06.550151 waagent[1901]: 2025-01-30T14:11:06.550088Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 30 14:11:06.550151 waagent[1901]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 14:11:06.550151 waagent[1901]: pkts bytes target prot opt in out source destination Jan 30 14:11:06.550151 waagent[1901]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 14:11:06.550151 waagent[1901]: pkts bytes target prot opt in out source destination Jan 30 14:11:06.550151 waagent[1901]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 14:11:06.550151 waagent[1901]: pkts bytes target prot opt in out source destination Jan 30 
14:11:06.550151 waagent[1901]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 14:11:06.550151 waagent[1901]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 14:11:06.550151 waagent[1901]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 14:11:06.550418 waagent[1901]: 2025-01-30T14:11:06.550380Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 30 14:11:10.665948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:11:10.674399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:10.776828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:10.781264 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:10.829248 kubelet[2133]: E0130 14:11:10.829185 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:10.831805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:10.831933 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:20.916068 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 14:11:20.924473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:21.198517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:11:21.203070 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:21.242677 kubelet[2148]: E0130 14:11:21.242580 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:21.244843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:21.244997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:22.508048 chronyd[1672]: Selected source PHC0 Jan 30 14:11:31.416028 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 14:11:31.424481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:31.714880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:31.719394 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:31.757979 kubelet[2163]: E0130 14:11:31.757922 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:31.760528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:31.760794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:32.055147 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 30 14:11:32.056907 systemd[1]: Started sshd@0-10.200.20.19:22-10.200.16.10:40160.service - OpenSSH per-connection server daemon (10.200.16.10:40160). Jan 30 14:11:32.540056 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 40160 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:11:32.541424 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:32.545422 systemd-logind[1694]: New session 3 of user core. Jan 30 14:11:32.553386 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:11:32.932947 systemd[1]: Started sshd@1-10.200.20.19:22-10.200.16.10:40170.service - OpenSSH per-connection server daemon (10.200.16.10:40170). Jan 30 14:11:33.359583 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 40170 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:11:33.360936 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:33.365074 systemd-logind[1694]: New session 4 of user core. Jan 30 14:11:33.373422 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:11:33.690027 sshd[2176]: pam_unix(sshd:session): session closed for user core Jan 30 14:11:33.693567 systemd[1]: sshd@1-10.200.20.19:22-10.200.16.10:40170.service: Deactivated successfully. Jan 30 14:11:33.695325 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:11:33.697666 systemd-logind[1694]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:11:33.698795 systemd-logind[1694]: Removed session 4. Jan 30 14:11:33.771466 systemd[1]: Started sshd@2-10.200.20.19:22-10.200.16.10:40186.service - OpenSSH per-connection server daemon (10.200.16.10:40186). 
Jan 30 14:11:34.216348 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 40186 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:11:34.217717 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:34.221602 systemd-logind[1694]: New session 5 of user core. Jan 30 14:11:34.229400 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:11:34.543042 sshd[2183]: pam_unix(sshd:session): session closed for user core Jan 30 14:11:34.546908 systemd[1]: sshd@2-10.200.20.19:22-10.200.16.10:40186.service: Deactivated successfully. Jan 30 14:11:34.549014 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:11:34.549785 systemd-logind[1694]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:11:34.550901 systemd-logind[1694]: Removed session 5. Jan 30 14:11:34.621061 systemd[1]: Started sshd@3-10.200.20.19:22-10.200.16.10:40196.service - OpenSSH per-connection server daemon (10.200.16.10:40196). Jan 30 14:11:35.049140 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 40196 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:11:35.050498 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:35.054632 systemd-logind[1694]: New session 6 of user core. Jan 30 14:11:35.063390 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 14:11:35.381134 sshd[2190]: pam_unix(sshd:session): session closed for user core Jan 30 14:11:35.384533 systemd[1]: sshd@3-10.200.20.19:22-10.200.16.10:40196.service: Deactivated successfully. Jan 30 14:11:35.386513 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:11:35.389435 systemd-logind[1694]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:11:35.390427 systemd-logind[1694]: Removed session 6. 
Jan 30 14:11:35.459749 systemd[1]: Started sshd@4-10.200.20.19:22-10.200.16.10:40200.service - OpenSSH per-connection server daemon (10.200.16.10:40200).
Jan 30 14:11:35.890246 sshd[2197]: Accepted publickey for core from 10.200.16.10 port 40200 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:11:35.891619 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:11:35.895370 systemd-logind[1694]: New session 7 of user core.
Jan 30 14:11:35.902376 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 14:11:36.242826 sudo[2200]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 14:11:36.243095 sudo[2200]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 14:11:36.293015 sudo[2200]: pam_unix(sudo:session): session closed for user root
Jan 30 14:11:36.362669 sshd[2197]: pam_unix(sshd:session): session closed for user core
Jan 30 14:11:36.366563 systemd[1]: sshd@4-10.200.20.19:22-10.200.16.10:40200.service: Deactivated successfully.
Jan 30 14:11:36.368117 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 14:11:36.368781 systemd-logind[1694]: Session 7 logged out. Waiting for processes to exit.
Jan 30 14:11:36.369945 systemd-logind[1694]: Removed session 7.
Jan 30 14:11:36.442939 systemd[1]: Started sshd@5-10.200.20.19:22-10.200.16.10:48524.service - OpenSSH per-connection server daemon (10.200.16.10:48524).
Jan 30 14:11:36.871796 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 48524 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:11:36.873182 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:11:36.876854 systemd-logind[1694]: New session 8 of user core.
Jan 30 14:11:36.884380 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 14:11:37.117933 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 14:11:37.118205 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 14:11:37.121362 sudo[2209]: pam_unix(sudo:session): session closed for user root
Jan 30 14:11:37.126289 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 30 14:11:37.126928 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 14:11:37.140484 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 30 14:11:37.141781 auditctl[2212]: No rules
Jan 30 14:11:37.142090 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 14:11:37.142285 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 30 14:11:37.145753 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 14:11:37.173708 augenrules[2230]: No rules
Jan 30 14:11:37.174900 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 14:11:37.176415 sudo[2208]: pam_unix(sudo:session): session closed for user root
Jan 30 14:11:37.262465 sshd[2205]: pam_unix(sshd:session): session closed for user core
Jan 30 14:11:37.264968 systemd[1]: sshd@5-10.200.20.19:22-10.200.16.10:48524.service: Deactivated successfully.
Jan 30 14:11:37.266669 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 14:11:37.268130 systemd-logind[1694]: Session 8 logged out. Waiting for processes to exit.
Jan 30 14:11:37.269004 systemd-logind[1694]: Removed session 8.
Jan 30 14:11:37.350491 systemd[1]: Started sshd@6-10.200.20.19:22-10.200.16.10:48536.service - OpenSSH per-connection server daemon (10.200.16.10:48536).
Jan 30 14:11:37.792293 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 48536 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:11:37.793629 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:11:37.798423 systemd-logind[1694]: New session 9 of user core.
Jan 30 14:11:37.804448 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 14:11:38.046811 sudo[2241]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 14:11:38.047554 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 14:11:39.103491 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 30 14:11:39.103678 (dockerd)[2256]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 30 14:11:39.937832 dockerd[2256]: time="2025-01-30T14:11:39.937338408Z" level=info msg="Starting up"
Jan 30 14:11:40.350410 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4265706827-merged.mount: Deactivated successfully.
Jan 30 14:11:40.440588 dockerd[2256]: time="2025-01-30T14:11:40.440358052Z" level=info msg="Loading containers: start."
Jan 30 14:11:40.635251 kernel: Initializing XFRM netlink socket
Jan 30 14:11:40.802504 systemd-networkd[1621]: docker0: Link UP
Jan 30 14:11:40.857243 dockerd[2256]: time="2025-01-30T14:11:40.856623151Z" level=info msg="Loading containers: done."
Jan 30 14:11:40.887510 dockerd[2256]: time="2025-01-30T14:11:40.887467229Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 14:11:40.887815 dockerd[2256]: time="2025-01-30T14:11:40.887795949Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 30 14:11:40.888017 dockerd[2256]: time="2025-01-30T14:11:40.887997709Z" level=info msg="Daemon has completed initialization"
Jan 30 14:11:40.961496 dockerd[2256]: time="2025-01-30T14:11:40.961434957Z" level=info msg="API listen on /run/docker.sock"
Jan 30 14:11:40.961841 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 14:11:41.347078 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3762742408-merged.mount: Deactivated successfully.
Jan 30 14:11:41.896384 containerd[1721]: time="2025-01-30T14:11:41.896339440Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 30 14:11:41.915861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 30 14:11:41.926417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:11:42.023864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:11:42.030672 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:11:42.069599 kubelet[2400]: E0130 14:11:42.069538 2400 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:11:42.071348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:11:42.071476 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:11:43.094929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2633643126.mount: Deactivated successfully.
Jan 30 14:11:43.473644 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jan 30 14:11:44.409652 update_engine[1698]: I20250130 14:11:44.409583 1698 update_attempter.cc:509] Updating boot flags...
Jan 30 14:11:44.468313 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2469)
Jan 30 14:11:44.559662 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2472)
Jan 30 14:11:45.004945 containerd[1721]: time="2025-01-30T14:11:45.004885764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:45.008034 containerd[1721]: time="2025-01-30T14:11:45.007808205Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618070"
Jan 30 14:11:45.011124 containerd[1721]: time="2025-01-30T14:11:45.011078886Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:45.017002 containerd[1721]: time="2025-01-30T14:11:45.016953407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:45.017893 containerd[1721]: time="2025-01-30T14:11:45.017709768Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 3.121329128s"
Jan 30 14:11:45.017893 containerd[1721]: time="2025-01-30T14:11:45.017748448Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\""
Jan 30 14:11:45.018357 containerd[1721]: time="2025-01-30T14:11:45.018318928Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 30 14:11:47.110463 containerd[1721]: time="2025-01-30T14:11:47.110401598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:47.112536 containerd[1721]: time="2025-01-30T14:11:47.112496479Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469467"
Jan 30 14:11:47.116014 containerd[1721]: time="2025-01-30T14:11:47.115963040Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:47.121917 containerd[1721]: time="2025-01-30T14:11:47.121857602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:47.123059 containerd[1721]: time="2025-01-30T14:11:47.122899322Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 2.104454074s"
Jan 30 14:11:47.123059 containerd[1721]: time="2025-01-30T14:11:47.122934922Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\""
Jan 30 14:11:47.123711 containerd[1721]: time="2025-01-30T14:11:47.123537362Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 30 14:11:48.890273 containerd[1721]: time="2025-01-30T14:11:48.889453973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:48.892303 containerd[1721]: time="2025-01-30T14:11:48.892255654Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024217"
Jan 30 14:11:48.895650 containerd[1721]: time="2025-01-30T14:11:48.895603255Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:48.901009 containerd[1721]: time="2025-01-30T14:11:48.900958377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:48.902082 containerd[1721]: time="2025-01-30T14:11:48.901953458Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.778385416s"
Jan 30 14:11:48.902082 containerd[1721]: time="2025-01-30T14:11:48.901989578Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\""
Jan 30 14:11:48.902732 containerd[1721]: time="2025-01-30T14:11:48.902558138Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 30 14:11:50.047010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3686949007.mount: Deactivated successfully.
Jan 30 14:11:50.451265 containerd[1721]: time="2025-01-30T14:11:50.450635598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:50.454003 containerd[1721]: time="2025-01-30T14:11:50.453964719Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772117"
Jan 30 14:11:50.458108 containerd[1721]: time="2025-01-30T14:11:50.458076401Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:50.462442 containerd[1721]: time="2025-01-30T14:11:50.462375242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:50.463198 containerd[1721]: time="2025-01-30T14:11:50.463057002Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.560465104s"
Jan 30 14:11:50.463198 containerd[1721]: time="2025-01-30T14:11:50.463095962Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\""
Jan 30 14:11:50.463957 containerd[1721]: time="2025-01-30T14:11:50.463904883Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 30 14:11:51.127397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3178319473.mount: Deactivated successfully.
Jan 30 14:11:52.165906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 30 14:11:52.173617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:11:52.475841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:11:52.489684 (kubelet)[2595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:11:52.531375 kubelet[2595]: E0130 14:11:52.531255 2595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:11:52.533518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:11:52.533663 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:11:52.935441 containerd[1721]: time="2025-01-30T14:11:52.935382842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:52.937589 containerd[1721]: time="2025-01-30T14:11:52.937350762Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jan 30 14:11:52.943606 containerd[1721]: time="2025-01-30T14:11:52.943544684Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:52.949138 containerd[1721]: time="2025-01-30T14:11:52.949084966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:52.950274 containerd[1721]: time="2025-01-30T14:11:52.949880166Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.485940443s"
Jan 30 14:11:52.950274 containerd[1721]: time="2025-01-30T14:11:52.949918366Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 30 14:11:52.951044 containerd[1721]: time="2025-01-30T14:11:52.951004847Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 30 14:11:53.552601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254441740.mount: Deactivated successfully.
Jan 30 14:11:53.579165 containerd[1721]: time="2025-01-30T14:11:53.578378050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:53.580402 containerd[1721]: time="2025-01-30T14:11:53.580355810Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 30 14:11:53.583577 containerd[1721]: time="2025-01-30T14:11:53.583544011Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:53.588411 containerd[1721]: time="2025-01-30T14:11:53.588361493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:53.589418 containerd[1721]: time="2025-01-30T14:11:53.589384773Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 638.339406ms"
Jan 30 14:11:53.589538 containerd[1721]: time="2025-01-30T14:11:53.589522373Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 30 14:11:53.590155 containerd[1721]: time="2025-01-30T14:11:53.590124974Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 30 14:11:54.295419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1087913387.mount: Deactivated successfully.
Jan 30 14:11:59.180062 containerd[1721]: time="2025-01-30T14:11:59.179216257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:59.181913 containerd[1721]: time="2025-01-30T14:11:59.181637299Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Jan 30 14:11:59.185206 containerd[1721]: time="2025-01-30T14:11:59.185146262Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:59.193545 containerd[1721]: time="2025-01-30T14:11:59.193216549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:59.194140 containerd[1721]: time="2025-01-30T14:11:59.194100590Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 5.603938936s"
Jan 30 14:11:59.194140 containerd[1721]: time="2025-01-30T14:11:59.194137830Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jan 30 14:12:02.666000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 30 14:12:02.683908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:12:03.131503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:12:03.136082 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:12:03.177262 kubelet[2687]: E0130 14:12:03.176462 2687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:12:03.179764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:12:03.180060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:12:06.965563 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:12:06.976598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:12:07.008709 systemd[1]: Reloading requested from client PID 2701 ('systemctl') (unit session-9.scope)...
Jan 30 14:12:07.008888 systemd[1]: Reloading...
Jan 30 14:12:07.143257 zram_generator::config[2744]: No configuration found.
Jan 30 14:12:07.272909 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:12:07.353911 systemd[1]: Reloading finished in 344 ms.
Jan 30 14:12:07.408452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:12:07.412858 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:12:07.415684 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 14:12:07.415905 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:12:07.422491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:12:14.919288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:12:14.930783 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 14:12:14.972109 kubelet[2810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:12:14.973525 kubelet[2810]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 14:12:14.973525 kubelet[2810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:12:14.973525 kubelet[2810]: I0130 14:12:14.972340 2810 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 14:12:16.803176 kubelet[2810]: I0130 14:12:16.803128 2810 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 30 14:12:16.803629 kubelet[2810]: I0130 14:12:16.803614 2810 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 14:12:16.804050 kubelet[2810]: I0130 14:12:16.804035 2810 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 30 14:12:16.825268 kubelet[2810]: E0130 14:12:16.825187 2810 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:12:16.826537 kubelet[2810]: I0130 14:12:16.826507 2810 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 14:12:16.834901 kubelet[2810]: E0130 14:12:16.834856 2810 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 14:12:16.834901 kubelet[2810]: I0130 14:12:16.834896 2810 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 14:12:16.840170 kubelet[2810]: I0130 14:12:16.840141 2810 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 14:12:16.840296 kubelet[2810]: I0130 14:12:16.840276 2810 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 14:12:16.840440 kubelet[2810]: I0130 14:12:16.840402 2810 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 14:12:16.840626 kubelet[2810]: I0130 14:12:16.840439 2810 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-1247579205","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 14:12:16.840719 kubelet[2810]: I0130 14:12:16.840633 2810 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 14:12:16.840719 kubelet[2810]: I0130 14:12:16.840642 2810 container_manager_linux.go:300] "Creating device plugin manager"
Jan 30 14:12:16.840798 kubelet[2810]: I0130 14:12:16.840777 2810 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:12:16.842837 kubelet[2810]: I0130 14:12:16.842499 2810 kubelet.go:408] "Attempting to sync node with API server"
Jan 30 14:12:16.842837 kubelet[2810]: I0130 14:12:16.842548 2810 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 14:12:16.842837 kubelet[2810]: I0130 14:12:16.842580 2810 kubelet.go:314] "Adding apiserver pod source"
Jan 30 14:12:16.842837 kubelet[2810]: I0130 14:12:16.842593 2810 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 14:12:16.846966 kubelet[2810]: W0130 14:12:16.846525 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-1247579205&limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused
Jan 30 14:12:16.846966 kubelet[2810]: E0130 14:12:16.846595 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-1247579205&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:12:16.848800 kubelet[2810]: W0130 14:12:16.848720 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused
Jan 30 14:12:16.848800 kubelet[2810]: E0130 14:12:16.848799 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:12:16.848997 kubelet[2810]: I0130 14:12:16.848910 2810 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 14:12:16.850657 kubelet[2810]: I0130 14:12:16.850571 2810 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 14:12:16.851553 kubelet[2810]: W0130 14:12:16.851328 2810 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 14:12:16.852871 kubelet[2810]: I0130 14:12:16.852853 2810 server.go:1269] "Started kubelet"
Jan 30 14:12:16.853286 kubelet[2810]: I0130 14:12:16.853258 2810 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 14:12:16.854236 kubelet[2810]: I0130 14:12:16.854205 2810 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 14:12:16.858793 kubelet[2810]: I0130 14:12:16.858664 2810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 14:12:16.859979 kubelet[2810]: I0130 14:12:16.859276 2810 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 14:12:16.859979 kubelet[2810]: I0130 14:12:16.859503 2810 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 14:12:16.866676 kubelet[2810]: I0130 14:12:16.866638 2810 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 14:12:16.868830 kubelet[2810]: I0130 14:12:16.868805 2810 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 14:12:16.869304 kubelet[2810]: E0130 14:12:16.869280 2810 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-1247579205\" not found"
Jan 30 14:12:16.869764 kubelet[2810]: E0130 14:12:16.860511 2810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.19:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-1247579205.181f7dd5703c8626 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-1247579205,UID:ci-4081.3.0-a-1247579205,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-1247579205,},FirstTimestamp:2025-01-30 14:12:16.852821542 +0000 UTC m=+1.918426503,LastTimestamp:2025-01-30 14:12:16.852821542 +0000 UTC m=+1.918426503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-1247579205,}"
Jan 30 14:12:16.872060 kubelet[2810]: I0130 14:12:16.871314 2810 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 14:12:16.872060 kubelet[2810]: I0130 14:12:16.871384 2810 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 14:12:16.872060 kubelet[2810]: W0130 14:12:16.871881 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused
Jan 30 14:12:16.872060 kubelet[2810]: E0130 14:12:16.871934 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:12:16.872060 kubelet[2810]: E0130 14:12:16.872009 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-1247579205?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="200ms"
Jan 30 14:12:16.873694 kubelet[2810]: I0130 14:12:16.873669 2810 factory.go:221] Registration of the systemd container factory successfully
Jan 30 14:12:16.873921 kubelet[2810]: I0130 14:12:16.873900 2810 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 14:12:16.878150 kubelet[2810]: I0130 14:12:16.878121 2810 factory.go:221] Registration of the containerd container factory successfully
Jan 30 14:12:16.888671 kubelet[2810]: E0130 14:12:16.888642 2810 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 14:12:16.943567 kubelet[2810]: I0130 14:12:16.943522 2810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 14:12:16.945447 kubelet[2810]: I0130 14:12:16.945410 2810 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Jan 30 14:12:16.945447 kubelet[2810]: I0130 14:12:16.945445 2810 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:12:16.945592 kubelet[2810]: I0130 14:12:16.945465 2810 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 14:12:16.945592 kubelet[2810]: E0130 14:12:16.945509 2810 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:12:16.947202 kubelet[2810]: W0130 14:12:16.947155 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Jan 30 14:12:16.947630 kubelet[2810]: E0130 14:12:16.947387 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:12:16.970391 kubelet[2810]: E0130 14:12:16.970348 2810 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-1247579205\" not found" Jan 30 14:12:17.000008 kubelet[2810]: I0130 14:12:16.999932 2810 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:12:17.000008 kubelet[2810]: I0130 14:12:16.999950 2810 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:12:17.000008 kubelet[2810]: I0130 14:12:16.999973 2810 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:12:17.005196 kubelet[2810]: I0130 14:12:17.005077 2810 policy_none.go:49] "None policy: Start" Jan 30 14:12:17.005969 kubelet[2810]: I0130 14:12:17.005897 2810 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 
14:12:17.005969 kubelet[2810]: I0130 14:12:17.005934 2810 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:12:17.016382 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:12:17.031313 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 14:12:17.034899 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 14:12:17.043330 kubelet[2810]: I0130 14:12:17.043202 2810 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:12:17.043451 kubelet[2810]: I0130 14:12:17.043439 2810 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:12:17.043765 kubelet[2810]: I0130 14:12:17.043450 2810 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:12:17.043765 kubelet[2810]: I0130 14:12:17.043711 2810 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:12:17.047299 kubelet[2810]: E0130 14:12:17.047058 2810 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-1247579205\" not found" Jan 30 14:12:17.056142 systemd[1]: Created slice kubepods-burstable-pod81a4b03be168a4a9f68ace25e838d8e4.slice - libcontainer container kubepods-burstable-pod81a4b03be168a4a9f68ace25e838d8e4.slice. Jan 30 14:12:17.071251 systemd[1]: Created slice kubepods-burstable-pod98c892f572e731ee2c50d994df1e79fa.slice - libcontainer container kubepods-burstable-pod98c892f572e731ee2c50d994df1e79fa.slice. 
Jan 30 14:12:17.072448 kubelet[2810]: I0130 14:12:17.071608 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81a4b03be168a4a9f68ace25e838d8e4-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-1247579205\" (UID: \"81a4b03be168a4a9f68ace25e838d8e4\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-1247579205" Jan 30 14:12:17.072448 kubelet[2810]: I0130 14:12:17.071635 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98c892f572e731ee2c50d994df1e79fa-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-1247579205\" (UID: \"98c892f572e731ee2c50d994df1e79fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-1247579205" Jan 30 14:12:17.072448 kubelet[2810]: I0130 14:12:17.071657 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98c892f572e731ee2c50d994df1e79fa-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-1247579205\" (UID: \"98c892f572e731ee2c50d994df1e79fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-1247579205" Jan 30 14:12:17.072448 kubelet[2810]: I0130 14:12:17.071674 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98c892f572e731ee2c50d994df1e79fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-1247579205\" (UID: \"98c892f572e731ee2c50d994df1e79fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-1247579205" Jan 30 14:12:17.072448 kubelet[2810]: I0130 14:12:17.071692 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0e2e33deb5dfdb86672c8aa93e08dc6-kubeconfig\") 
pod \"kube-scheduler-ci-4081.3.0-a-1247579205\" (UID: \"a0e2e33deb5dfdb86672c8aa93e08dc6\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-1247579205" Jan 30 14:12:17.072578 kubelet[2810]: I0130 14:12:17.071707 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81a4b03be168a4a9f68ace25e838d8e4-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-1247579205\" (UID: \"81a4b03be168a4a9f68ace25e838d8e4\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-1247579205" Jan 30 14:12:17.072578 kubelet[2810]: I0130 14:12:17.071723 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98c892f572e731ee2c50d994df1e79fa-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-1247579205\" (UID: \"98c892f572e731ee2c50d994df1e79fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-1247579205" Jan 30 14:12:17.072578 kubelet[2810]: I0130 14:12:17.071748 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98c892f572e731ee2c50d994df1e79fa-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-1247579205\" (UID: \"98c892f572e731ee2c50d994df1e79fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-1247579205" Jan 30 14:12:17.072578 kubelet[2810]: I0130 14:12:17.071764 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81a4b03be168a4a9f68ace25e838d8e4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-1247579205\" (UID: \"81a4b03be168a4a9f68ace25e838d8e4\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-1247579205" Jan 30 14:12:17.072578 kubelet[2810]: E0130 14:12:17.072481 2810 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-1247579205?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="400ms" Jan 30 14:12:17.081714 systemd[1]: Created slice kubepods-burstable-poda0e2e33deb5dfdb86672c8aa93e08dc6.slice - libcontainer container kubepods-burstable-poda0e2e33deb5dfdb86672c8aa93e08dc6.slice. Jan 30 14:12:17.146306 kubelet[2810]: I0130 14:12:17.146271 2810 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:17.146738 kubelet[2810]: E0130 14:12:17.146706 2810 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:17.349018 kubelet[2810]: I0130 14:12:17.348896 2810 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:17.349501 kubelet[2810]: E0130 14:12:17.349470 2810 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:17.370866 containerd[1721]: time="2025-01-30T14:12:17.370810196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-1247579205,Uid:81a4b03be168a4a9f68ace25e838d8e4,Namespace:kube-system,Attempt:0,}" Jan 30 14:12:17.379763 containerd[1721]: time="2025-01-30T14:12:17.379720680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-1247579205,Uid:98c892f572e731ee2c50d994df1e79fa,Namespace:kube-system,Attempt:0,}" Jan 30 14:12:17.385018 containerd[1721]: time="2025-01-30T14:12:17.384781083Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-1247579205,Uid:a0e2e33deb5dfdb86672c8aa93e08dc6,Namespace:kube-system,Attempt:0,}" Jan 30 14:12:17.473843 kubelet[2810]: E0130 14:12:17.473792 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-1247579205?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="800ms" Jan 30 14:12:17.752179 kubelet[2810]: I0130 14:12:17.752143 2810 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:17.752581 kubelet[2810]: E0130 14:12:17.752538 2810 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:17.956156 kubelet[2810]: W0130 14:12:17.956070 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Jan 30 14:12:17.956156 kubelet[2810]: E0130 14:12:17.956117 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:12:18.233000 kubelet[2810]: W0130 14:12:18.232884 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Jan 30 14:12:18.233000 kubelet[2810]: E0130 
14:12:18.232963 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:12:18.274769 kubelet[2810]: E0130 14:12:18.274716 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-1247579205?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="1.6s" Jan 30 14:12:18.300341 kubelet[2810]: W0130 14:12:18.300305 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Jan 30 14:12:18.300424 kubelet[2810]: E0130 14:12:18.300353 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:12:18.372539 kubelet[2810]: W0130 14:12:18.372467 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-1247579205&limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Jan 30 14:12:18.372688 kubelet[2810]: E0130 14:12:18.372550 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-1247579205&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:12:18.555365 kubelet[2810]: I0130 14:12:18.555252 2810 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:18.555784 kubelet[2810]: E0130 14:12:18.555596 2810 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:18.839992 kubelet[2810]: E0130 14:12:18.839873 2810 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:12:19.310366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount266081511.mount: Deactivated successfully. 
Jan 30 14:12:19.604333 containerd[1721]: time="2025-01-30T14:12:19.604138083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:12:19.662405 containerd[1721]: time="2025-01-30T14:12:19.662331296Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 30 14:12:19.664847 containerd[1721]: time="2025-01-30T14:12:19.664804055Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:12:19.711116 containerd[1721]: time="2025-01-30T14:12:19.710962714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:12:19.715263 containerd[1721]: time="2025-01-30T14:12:19.714666512Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:12:19.720247 containerd[1721]: time="2025-01-30T14:12:19.719104270Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:12:19.722146 containerd[1721]: time="2025-01-30T14:12:19.722070949Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:12:19.727369 containerd[1721]: time="2025-01-30T14:12:19.727311466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:12:19.728417 
containerd[1721]: time="2025-01-30T14:12:19.728160306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.35726103s" Jan 30 14:12:19.729342 containerd[1721]: time="2025-01-30T14:12:19.729291145Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.349290665s" Jan 30 14:12:19.736759 containerd[1721]: time="2025-01-30T14:12:19.736558342Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.351693819s" Jan 30 14:12:19.875466 kubelet[2810]: E0130 14:12:19.875329 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-1247579205?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="3.2s" Jan 30 14:12:20.025978 kubelet[2810]: W0130 14:12:20.025891 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Jan 30 14:12:20.025978 kubelet[2810]: E0130 14:12:20.025941 2810 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:12:20.157942 kubelet[2810]: I0130 14:12:20.157350 2810 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:20.157942 kubelet[2810]: E0130 14:12:20.157786 2810 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:20.194722 kubelet[2810]: W0130 14:12:20.194677 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Jan 30 14:12:20.194881 kubelet[2810]: E0130 14:12:20.194728 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:12:20.421259 kubelet[2810]: W0130 14:12:20.421198 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Jan 30 14:12:20.421468 kubelet[2810]: E0130 14:12:20.421267 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:12:20.739849 containerd[1721]: time="2025-01-30T14:12:20.739484840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:12:20.739849 containerd[1721]: time="2025-01-30T14:12:20.739590119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:12:20.739849 containerd[1721]: time="2025-01-30T14:12:20.739614199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:20.743299 containerd[1721]: time="2025-01-30T14:12:20.740495239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:20.745116 containerd[1721]: time="2025-01-30T14:12:20.744933717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:12:20.745116 containerd[1721]: time="2025-01-30T14:12:20.745003917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:12:20.745116 containerd[1721]: time="2025-01-30T14:12:20.745023717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:20.746445 containerd[1721]: time="2025-01-30T14:12:20.746304716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:20.748055 containerd[1721]: time="2025-01-30T14:12:20.747484956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:12:20.748055 containerd[1721]: time="2025-01-30T14:12:20.747541596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:12:20.748055 containerd[1721]: time="2025-01-30T14:12:20.747566916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:20.748055 containerd[1721]: time="2025-01-30T14:12:20.747654476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:20.794459 systemd[1]: Started cri-containerd-02dfe0c7492a56d633686c37cc4f8195bd0391b9474cc6369a8d18c887ce0fb4.scope - libcontainer container 02dfe0c7492a56d633686c37cc4f8195bd0391b9474cc6369a8d18c887ce0fb4. Jan 30 14:12:20.796647 systemd[1]: Started cri-containerd-35f0ea72c1c0ff8f94d1e0798123fa03ede05f70f7e3df993d4e89d1e1eef701.scope - libcontainer container 35f0ea72c1c0ff8f94d1e0798123fa03ede05f70f7e3df993d4e89d1e1eef701. Jan 30 14:12:20.798021 systemd[1]: Started cri-containerd-a62c8d592c63544bbe5a893e08592b1a23f9731e803810a531d80c0a710db061.scope - libcontainer container a62c8d592c63544bbe5a893e08592b1a23f9731e803810a531d80c0a710db061. 
Jan 30 14:12:20.851751 containerd[1721]: time="2025-01-30T14:12:20.851542468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-1247579205,Uid:98c892f572e731ee2c50d994df1e79fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"02dfe0c7492a56d633686c37cc4f8195bd0391b9474cc6369a8d18c887ce0fb4\"" Jan 30 14:12:20.856470 containerd[1721]: time="2025-01-30T14:12:20.855974026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-1247579205,Uid:81a4b03be168a4a9f68ace25e838d8e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"35f0ea72c1c0ff8f94d1e0798123fa03ede05f70f7e3df993d4e89d1e1eef701\"" Jan 30 14:12:20.857466 containerd[1721]: time="2025-01-30T14:12:20.857426425Z" level=info msg="CreateContainer within sandbox \"02dfe0c7492a56d633686c37cc4f8195bd0391b9474cc6369a8d18c887ce0fb4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 14:12:20.859500 containerd[1721]: time="2025-01-30T14:12:20.859177024Z" level=info msg="CreateContainer within sandbox \"35f0ea72c1c0ff8f94d1e0798123fa03ede05f70f7e3df993d4e89d1e1eef701\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 14:12:20.862665 containerd[1721]: time="2025-01-30T14:12:20.862608863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-1247579205,Uid:a0e2e33deb5dfdb86672c8aa93e08dc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a62c8d592c63544bbe5a893e08592b1a23f9731e803810a531d80c0a710db061\"" Jan 30 14:12:20.866547 containerd[1721]: time="2025-01-30T14:12:20.866493021Z" level=info msg="CreateContainer within sandbox \"a62c8d592c63544bbe5a893e08592b1a23f9731e803810a531d80c0a710db061\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 14:12:20.952973 containerd[1721]: time="2025-01-30T14:12:20.952930421Z" level=info msg="CreateContainer within sandbox 
\"35f0ea72c1c0ff8f94d1e0798123fa03ede05f70f7e3df993d4e89d1e1eef701\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"249d6a6ad6ef0df3b4e9a17e8a3f4131c9d8019827d8de0bad0acda9c91b78f8\"" Jan 30 14:12:20.954119 containerd[1721]: time="2025-01-30T14:12:20.953897341Z" level=info msg="StartContainer for \"249d6a6ad6ef0df3b4e9a17e8a3f4131c9d8019827d8de0bad0acda9c91b78f8\"" Jan 30 14:12:20.960257 containerd[1721]: time="2025-01-30T14:12:20.958973418Z" level=info msg="CreateContainer within sandbox \"a62c8d592c63544bbe5a893e08592b1a23f9731e803810a531d80c0a710db061\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a662c69f30f94b9acb0a1c0babfcf2f6a18b278b0f855d002a8481f3e375cc09\"" Jan 30 14:12:20.960657 containerd[1721]: time="2025-01-30T14:12:20.960628098Z" level=info msg="StartContainer for \"a662c69f30f94b9acb0a1c0babfcf2f6a18b278b0f855d002a8481f3e375cc09\"" Jan 30 14:12:20.966395 containerd[1721]: time="2025-01-30T14:12:20.966349375Z" level=info msg="CreateContainer within sandbox \"02dfe0c7492a56d633686c37cc4f8195bd0391b9474cc6369a8d18c887ce0fb4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"66cd9aadb394f9e55573d453927e68895b2f69570526f951c87339c508c49517\"" Jan 30 14:12:20.967471 containerd[1721]: time="2025-01-30T14:12:20.967436654Z" level=info msg="StartContainer for \"66cd9aadb394f9e55573d453927e68895b2f69570526f951c87339c508c49517\"" Jan 30 14:12:20.992452 systemd[1]: Started cri-containerd-a662c69f30f94b9acb0a1c0babfcf2f6a18b278b0f855d002a8481f3e375cc09.scope - libcontainer container a662c69f30f94b9acb0a1c0babfcf2f6a18b278b0f855d002a8481f3e375cc09. Jan 30 14:12:21.003522 systemd[1]: Started cri-containerd-249d6a6ad6ef0df3b4e9a17e8a3f4131c9d8019827d8de0bad0acda9c91b78f8.scope - libcontainer container 249d6a6ad6ef0df3b4e9a17e8a3f4131c9d8019827d8de0bad0acda9c91b78f8. 
Jan 30 14:12:21.015475 systemd[1]: Started cri-containerd-66cd9aadb394f9e55573d453927e68895b2f69570526f951c87339c508c49517.scope - libcontainer container 66cd9aadb394f9e55573d453927e68895b2f69570526f951c87339c508c49517. Jan 30 14:12:21.077842 containerd[1721]: time="2025-01-30T14:12:21.077783843Z" level=info msg="StartContainer for \"249d6a6ad6ef0df3b4e9a17e8a3f4131c9d8019827d8de0bad0acda9c91b78f8\" returns successfully" Jan 30 14:12:21.079018 containerd[1721]: time="2025-01-30T14:12:21.078975763Z" level=info msg="StartContainer for \"a662c69f30f94b9acb0a1c0babfcf2f6a18b278b0f855d002a8481f3e375cc09\" returns successfully" Jan 30 14:12:21.079274 containerd[1721]: time="2025-01-30T14:12:21.079108563Z" level=info msg="StartContainer for \"66cd9aadb394f9e55573d453927e68895b2f69570526f951c87339c508c49517\" returns successfully" Jan 30 14:12:23.280385 kubelet[2810]: E0130 14:12:23.280208 2810 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-1247579205\" not found" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:23.347009 kubelet[2810]: E0130 14:12:23.346898 2810 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-1247579205.181f7dd5703c8626 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-1247579205,UID:ci-4081.3.0-a-1247579205,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-1247579205,},FirstTimestamp:2025-01-30 14:12:16.852821542 +0000 UTC m=+1.918426503,LastTimestamp:2025-01-30 14:12:16.852821542 +0000 UTC m=+1.918426503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-1247579205,}" Jan 30 14:12:23.361057 kubelet[2810]: I0130 14:12:23.361010 2810 
kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:23.394683 kubelet[2810]: I0130 14:12:23.394578 2810 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:23.394683 kubelet[2810]: E0130 14:12:23.394637 2810 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.0-a-1247579205\": node \"ci-4081.3.0-a-1247579205\" not found" Jan 30 14:12:23.424843 kubelet[2810]: E0130 14:12:23.424569 2810 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-1247579205.181f7dd5725ecd67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-1247579205,UID:ci-4081.3.0-a-1247579205,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-1247579205,},FirstTimestamp:2025-01-30 14:12:16.888622439 +0000 UTC m=+1.954227400,LastTimestamp:2025-01-30 14:12:16.888622439 +0000 UTC m=+1.954227400,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-1247579205,}" Jan 30 14:12:23.484870 kubelet[2810]: E0130 14:12:23.484615 2810 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-1247579205.181f7dd578f69e3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-1247579205,UID:ci-4081.3.0-a-1247579205,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081.3.0-a-1247579205 status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-1247579205,},FirstTimestamp:2025-01-30 14:12:16.999235133 +0000 UTC m=+2.064840094,LastTimestamp:2025-01-30 14:12:16.999235133 +0000 UTC m=+2.064840094,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-1247579205,}" Jan 30 14:12:23.850138 kubelet[2810]: I0130 14:12:23.850103 2810 apiserver.go:52] "Watching apiserver" Jan 30 14:12:23.871961 kubelet[2810]: I0130 14:12:23.871924 2810 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 14:12:23.985094 kubelet[2810]: E0130 14:12:23.985055 2810 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-a-1247579205\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.0-a-1247579205" Jan 30 14:12:23.985392 kubelet[2810]: E0130 14:12:23.985347 2810 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-1247579205\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.0-a-1247579205" Jan 30 14:12:25.361689 systemd[1]: Reloading requested from client PID 3086 ('systemctl') (unit session-9.scope)... Jan 30 14:12:25.361705 systemd[1]: Reloading... Jan 30 14:12:25.457281 zram_generator::config[3129]: No configuration found. Jan 30 14:12:25.562605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:12:25.654188 systemd[1]: Reloading finished in 292 ms. Jan 30 14:12:25.694567 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:25.705553 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 30 14:12:25.705758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:25.705818 systemd[1]: kubelet.service: Consumed 1.131s CPU time, 117.2M memory peak, 0B memory swap peak. Jan 30 14:12:25.712528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:25.898621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:25.912624 (kubelet)[3190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:12:25.959001 kubelet[3190]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:12:25.959001 kubelet[3190]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:12:25.959001 kubelet[3190]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 14:12:25.959001 kubelet[3190]: I0130 14:12:25.958163 3190 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:12:25.963676 kubelet[3190]: I0130 14:12:25.963637 3190 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 14:12:25.963676 kubelet[3190]: I0130 14:12:25.963668 3190 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:12:25.964043 kubelet[3190]: I0130 14:12:25.964016 3190 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 14:12:25.965496 kubelet[3190]: I0130 14:12:25.965464 3190 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 14:12:25.972634 kubelet[3190]: I0130 14:12:25.972453 3190 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:12:25.977631 kubelet[3190]: E0130 14:12:25.977576 3190 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 14:12:25.978310 kubelet[3190]: I0130 14:12:25.977724 3190 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 14:12:25.981053 kubelet[3190]: I0130 14:12:25.980977 3190 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:12:25.981168 kubelet[3190]: I0130 14:12:25.981085 3190 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 14:12:25.981194 kubelet[3190]: I0130 14:12:25.981162 3190 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:12:25.981404 kubelet[3190]: I0130 14:12:25.981186 3190 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-1247579205","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 14:12:25.981498 kubelet[3190]: I0130 14:12:25.981412 3190 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:12:25.981498 kubelet[3190]: I0130 14:12:25.981421 3190 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 14:12:25.981498 kubelet[3190]: I0130 14:12:25.981452 3190 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:12:25.981575 kubelet[3190]: I0130 14:12:25.981567 3190 kubelet.go:408] "Attempting to sync node with API server" Jan 30 14:12:25.981599 kubelet[3190]: I0130 14:12:25.981580 3190 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:12:25.981620 kubelet[3190]: I0130 14:12:25.981602 3190 kubelet.go:314] "Adding apiserver pod source" Jan 30 14:12:25.981620 kubelet[3190]: I0130 14:12:25.981611 3190 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:12:25.984420 kubelet[3190]: I0130 14:12:25.984389 3190 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:12:25.984900 kubelet[3190]: I0130 14:12:25.984866 3190 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:12:25.987452 kubelet[3190]: I0130 14:12:25.987423 3190 server.go:1269] "Started kubelet" Jan 30 14:12:25.989804 kubelet[3190]: I0130 14:12:25.989736 3190 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:12:25.990181 kubelet[3190]: I0130 14:12:25.990168 3190 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:12:25.990373 kubelet[3190]: I0130 14:12:25.990351 3190 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:12:25.991782 kubelet[3190]: I0130 14:12:25.990728 3190 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 
30 14:12:25.993625 kubelet[3190]: I0130 14:12:25.993602 3190 server.go:460] "Adding debug handlers to kubelet server" Jan 30 14:12:25.996181 kubelet[3190]: I0130 14:12:25.996135 3190 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:12:25.997050 kubelet[3190]: I0130 14:12:25.997023 3190 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 14:12:25.997301 kubelet[3190]: E0130 14:12:25.997216 3190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-1247579205\" not found" Jan 30 14:12:25.997454 kubelet[3190]: I0130 14:12:25.997439 3190 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 14:12:26.010263 kubelet[3190]: I0130 14:12:26.010117 3190 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:12:26.025931 kubelet[3190]: I0130 14:12:26.025898 3190 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:12:26.032289 kubelet[3190]: I0130 14:12:26.032116 3190 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:12:26.034572 kubelet[3190]: I0130 14:12:26.034541 3190 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:12:26.035251 kubelet[3190]: I0130 14:12:26.034891 3190 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:12:26.040346 kubelet[3190]: I0130 14:12:26.034767 3190 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:12:26.040538 kubelet[3190]: I0130 14:12:26.040524 3190 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:12:26.040605 kubelet[3190]: I0130 14:12:26.040596 3190 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 14:12:26.040708 kubelet[3190]: E0130 14:12:26.040691 3190 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:12:26.044313 kubelet[3190]: E0130 14:12:26.044284 3190 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:12:26.094136 kubelet[3190]: I0130 14:12:26.094105 3190 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:12:26.094136 kubelet[3190]: I0130 14:12:26.094127 3190 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:12:26.149656 kubelet[3190]: I0130 14:12:26.094157 3190 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:12:26.149656 kubelet[3190]: E0130 14:12:26.141640 3190 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 14:12:26.150101 kubelet[3190]: I0130 14:12:26.149944 3190 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:12:26.150101 kubelet[3190]: I0130 14:12:26.149966 3190 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:12:26.150101 kubelet[3190]: I0130 14:12:26.149987 3190 policy_none.go:49] "None policy: Start" Jan 30 14:12:26.151344 kubelet[3190]: I0130 14:12:26.151319 3190 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:12:26.151422 kubelet[3190]: I0130 14:12:26.151361 3190 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:12:26.151570 kubelet[3190]: I0130 14:12:26.151550 3190 state_mem.go:75] "Updated machine memory state" Jan 30 14:12:26.155956 
kubelet[3190]: I0130 14:12:26.155767 3190 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:12:26.155956 kubelet[3190]: I0130 14:12:26.155955 3190 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:12:26.156374 kubelet[3190]: I0130 14:12:26.155964 3190 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:12:26.156374 kubelet[3190]: I0130 14:12:26.156208 3190 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:12:26.265879 kubelet[3190]: I0130 14:12:26.265739 3190 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:26.277319 kubelet[3190]: I0130 14:12:26.277203 3190 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:26.277319 kubelet[3190]: I0130 14:12:26.277310 3190 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-1247579205" Jan 30 14:12:26.352346 kubelet[3190]: W0130 14:12:26.352266 3190 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:12:26.354685 kubelet[3190]: W0130 14:12:26.354139 3190 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:12:26.355575 kubelet[3190]: W0130 14:12:26.355422 3190 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:12:26.413006 kubelet[3190]: I0130 14:12:26.412201 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98c892f572e731ee2c50d994df1e79fa-ca-certs\") pod 
\"kube-controller-manager-ci-4081.3.0-a-1247579205\" (UID: \"98c892f572e731ee2c50d994df1e79fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-1247579205" Jan 30 14:12:26.413006 kubelet[3190]: I0130 14:12:26.412257 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98c892f572e731ee2c50d994df1e79fa-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-1247579205\" (UID: \"98c892f572e731ee2c50d994df1e79fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-1247579205" Jan 30 14:12:26.413006 kubelet[3190]: I0130 14:12:26.412278 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98c892f572e731ee2c50d994df1e79fa-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-1247579205\" (UID: \"98c892f572e731ee2c50d994df1e79fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-1247579205" Jan 30 14:12:26.413006 kubelet[3190]: I0130 14:12:26.412303 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98c892f572e731ee2c50d994df1e79fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-1247579205\" (UID: \"98c892f572e731ee2c50d994df1e79fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-1247579205" Jan 30 14:12:26.413006 kubelet[3190]: I0130 14:12:26.412325 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0e2e33deb5dfdb86672c8aa93e08dc6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-1247579205\" (UID: \"a0e2e33deb5dfdb86672c8aa93e08dc6\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-1247579205" Jan 30 14:12:26.413284 kubelet[3190]: I0130 14:12:26.412346 3190 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98c892f572e731ee2c50d994df1e79fa-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-1247579205\" (UID: \"98c892f572e731ee2c50d994df1e79fa\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-1247579205" Jan 30 14:12:26.413284 kubelet[3190]: I0130 14:12:26.412362 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81a4b03be168a4a9f68ace25e838d8e4-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-1247579205\" (UID: \"81a4b03be168a4a9f68ace25e838d8e4\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-1247579205" Jan 30 14:12:26.413284 kubelet[3190]: I0130 14:12:26.412379 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81a4b03be168a4a9f68ace25e838d8e4-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-1247579205\" (UID: \"81a4b03be168a4a9f68ace25e838d8e4\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-1247579205" Jan 30 14:12:26.413284 kubelet[3190]: I0130 14:12:26.412395 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81a4b03be168a4a9f68ace25e838d8e4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-1247579205\" (UID: \"81a4b03be168a4a9f68ace25e838d8e4\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-1247579205" Jan 30 14:12:26.984255 kubelet[3190]: I0130 14:12:26.982854 3190 apiserver.go:52] "Watching apiserver" Jan 30 14:12:26.997873 kubelet[3190]: I0130 14:12:26.997803 3190 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 14:12:27.118389 kubelet[3190]: W0130 14:12:27.118129 3190 warnings.go:70] metadata.name: this is used 
in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:12:27.118389 kubelet[3190]: E0130 14:12:27.118190 3190 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-1247579205\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-1247579205" Jan 30 14:12:27.118983 kubelet[3190]: W0130 14:12:27.118794 3190 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:12:27.118983 kubelet[3190]: E0130 14:12:27.118833 3190 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-a-1247579205\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-a-1247579205" Jan 30 14:12:27.121626 kubelet[3190]: I0130 14:12:27.121476 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-1247579205" podStartSLOduration=1.12146307 podStartE2EDuration="1.12146307s" podCreationTimestamp="2025-01-30 14:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:12:27.12114551 +0000 UTC m=+1.205273787" watchObservedRunningTime="2025-01-30 14:12:27.12146307 +0000 UTC m=+1.205591347" Jan 30 14:12:27.145001 kubelet[3190]: I0130 14:12:27.144941 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-1247579205" podStartSLOduration=1.144926315 podStartE2EDuration="1.144926315s" podCreationTimestamp="2025-01-30 14:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:12:27.144623355 +0000 UTC m=+1.228751632" watchObservedRunningTime="2025-01-30 14:12:27.144926315 +0000 UTC m=+1.229054592" Jan 30 
14:12:27.191269 kubelet[3190]: I0130 14:12:27.191098 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-1247579205" podStartSLOduration=1.191070085 podStartE2EDuration="1.191070085s" podCreationTimestamp="2025-01-30 14:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:12:27.165574079 +0000 UTC m=+1.249702356" watchObservedRunningTime="2025-01-30 14:12:27.191070085 +0000 UTC m=+1.275198322" Jan 30 14:12:31.155833 sudo[2241]: pam_unix(sudo:session): session closed for user root Jan 30 14:12:31.231894 sshd[2238]: pam_unix(sshd:session): session closed for user core Jan 30 14:12:31.234715 systemd[1]: sshd@6-10.200.20.19:22-10.200.16.10:48536.service: Deactivated successfully. Jan 30 14:12:31.237514 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:12:31.237905 systemd[1]: session-9.scope: Consumed 8.074s CPU time, 152.8M memory peak, 0B memory swap peak. Jan 30 14:12:31.240390 systemd-logind[1694]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:12:31.242465 systemd-logind[1694]: Removed session 9. Jan 30 14:12:32.590117 kubelet[3190]: I0130 14:12:32.590013 3190 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:12:32.591931 containerd[1721]: time="2025-01-30T14:12:32.591215625Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:12:32.592556 kubelet[3190]: I0130 14:12:32.591863 3190 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:12:33.523307 systemd[1]: Created slice kubepods-besteffort-podf1a87b3d_60f4_467f_93f7_fa24617ab222.slice - libcontainer container kubepods-besteffort-podf1a87b3d_60f4_467f_93f7_fa24617ab222.slice. 
Jan 30 14:12:33.561633 kubelet[3190]: I0130 14:12:33.561514 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mmpr\" (UniqueName: \"kubernetes.io/projected/f1a87b3d-60f4-467f-93f7-fa24617ab222-kube-api-access-8mmpr\") pod \"kube-proxy-fg64w\" (UID: \"f1a87b3d-60f4-467f-93f7-fa24617ab222\") " pod="kube-system/kube-proxy-fg64w" Jan 30 14:12:33.561633 kubelet[3190]: I0130 14:12:33.561552 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1a87b3d-60f4-467f-93f7-fa24617ab222-lib-modules\") pod \"kube-proxy-fg64w\" (UID: \"f1a87b3d-60f4-467f-93f7-fa24617ab222\") " pod="kube-system/kube-proxy-fg64w" Jan 30 14:12:33.561633 kubelet[3190]: I0130 14:12:33.561591 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1a87b3d-60f4-467f-93f7-fa24617ab222-kube-proxy\") pod \"kube-proxy-fg64w\" (UID: \"f1a87b3d-60f4-467f-93f7-fa24617ab222\") " pod="kube-system/kube-proxy-fg64w" Jan 30 14:12:33.561633 kubelet[3190]: I0130 14:12:33.561607 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1a87b3d-60f4-467f-93f7-fa24617ab222-xtables-lock\") pod \"kube-proxy-fg64w\" (UID: \"f1a87b3d-60f4-467f-93f7-fa24617ab222\") " pod="kube-system/kube-proxy-fg64w" Jan 30 14:12:33.704684 systemd[1]: Created slice kubepods-besteffort-podab447879_bddd_44b7_a918_eb4379f17e0c.slice - libcontainer container kubepods-besteffort-podab447879_bddd_44b7_a918_eb4379f17e0c.slice. 
Jan 30 14:12:33.763448 kubelet[3190]: I0130 14:12:33.763403 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rc9z\" (UniqueName: \"kubernetes.io/projected/ab447879-bddd-44b7-a918-eb4379f17e0c-kube-api-access-2rc9z\") pod \"tigera-operator-76c4976dd7-k242d\" (UID: \"ab447879-bddd-44b7-a918-eb4379f17e0c\") " pod="tigera-operator/tigera-operator-76c4976dd7-k242d" Jan 30 14:12:33.763448 kubelet[3190]: I0130 14:12:33.763451 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab447879-bddd-44b7-a918-eb4379f17e0c-var-lib-calico\") pod \"tigera-operator-76c4976dd7-k242d\" (UID: \"ab447879-bddd-44b7-a918-eb4379f17e0c\") " pod="tigera-operator/tigera-operator-76c4976dd7-k242d" Jan 30 14:12:33.837324 containerd[1721]: time="2025-01-30T14:12:33.836853848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fg64w,Uid:f1a87b3d-60f4-467f-93f7-fa24617ab222,Namespace:kube-system,Attempt:0,}" Jan 30 14:12:33.884703 containerd[1721]: time="2025-01-30T14:12:33.884584458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:12:33.884703 containerd[1721]: time="2025-01-30T14:12:33.884644658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:12:33.884703 containerd[1721]: time="2025-01-30T14:12:33.884656018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:33.885079 containerd[1721]: time="2025-01-30T14:12:33.884752138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:33.905448 systemd[1]: Started cri-containerd-4990bfa951d8798e4697cb7ac4599455365b0e3646497a7a84869b4834de2c87.scope - libcontainer container 4990bfa951d8798e4697cb7ac4599455365b0e3646497a7a84869b4834de2c87. Jan 30 14:12:33.927608 containerd[1721]: time="2025-01-30T14:12:33.927517107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fg64w,Uid:f1a87b3d-60f4-467f-93f7-fa24617ab222,Namespace:kube-system,Attempt:0,} returns sandbox id \"4990bfa951d8798e4697cb7ac4599455365b0e3646497a7a84869b4834de2c87\"" Jan 30 14:12:33.933661 containerd[1721]: time="2025-01-30T14:12:33.933129588Z" level=info msg="CreateContainer within sandbox \"4990bfa951d8798e4697cb7ac4599455365b0e3646497a7a84869b4834de2c87\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:12:33.983000 containerd[1721]: time="2025-01-30T14:12:33.982946159Z" level=info msg="CreateContainer within sandbox \"4990bfa951d8798e4697cb7ac4599455365b0e3646497a7a84869b4834de2c87\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5ec6d5ff193df53821acd1fd1d6f7166faa2e58f548c74a9b4f57c97dfd94c78\"" Jan 30 14:12:33.984641 containerd[1721]: time="2025-01-30T14:12:33.984598239Z" level=info msg="StartContainer for \"5ec6d5ff193df53821acd1fd1d6f7166faa2e58f548c74a9b4f57c97dfd94c78\"" Jan 30 14:12:34.007022 containerd[1721]: time="2025-01-30T14:12:34.006921524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-k242d,Uid:ab447879-bddd-44b7-a918-eb4379f17e0c,Namespace:tigera-operator,Attempt:0,}" Jan 30 14:12:34.008511 systemd[1]: Started cri-containerd-5ec6d5ff193df53821acd1fd1d6f7166faa2e58f548c74a9b4f57c97dfd94c78.scope - libcontainer container 5ec6d5ff193df53821acd1fd1d6f7166faa2e58f548c74a9b4f57c97dfd94c78. 
Jan 30 14:12:34.039823 containerd[1721]: time="2025-01-30T14:12:34.039749531Z" level=info msg="StartContainer for \"5ec6d5ff193df53821acd1fd1d6f7166faa2e58f548c74a9b4f57c97dfd94c78\" returns successfully" Jan 30 14:12:34.066255 containerd[1721]: time="2025-01-30T14:12:34.065689576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:12:34.066503 containerd[1721]: time="2025-01-30T14:12:34.065761016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:12:34.066503 containerd[1721]: time="2025-01-30T14:12:34.065792176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:34.066503 containerd[1721]: time="2025-01-30T14:12:34.065891696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:34.084881 systemd[1]: Started cri-containerd-ea8560424b023c35e9a23e39410918c58dba10e0e5d7b053825daa670a3304a5.scope - libcontainer container ea8560424b023c35e9a23e39410918c58dba10e0e5d7b053825daa670a3304a5. 
Jan 30 14:12:34.128635 containerd[1721]: time="2025-01-30T14:12:34.128131069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-k242d,Uid:ab447879-bddd-44b7-a918-eb4379f17e0c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ea8560424b023c35e9a23e39410918c58dba10e0e5d7b053825daa670a3304a5\"" Jan 30 14:12:34.131742 containerd[1721]: time="2025-01-30T14:12:34.131691150Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 14:12:36.094269 kubelet[3190]: I0130 14:12:36.094185 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fg64w" podStartSLOduration=3.094165557 podStartE2EDuration="3.094165557s" podCreationTimestamp="2025-01-30 14:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:12:34.118132267 +0000 UTC m=+8.202260544" watchObservedRunningTime="2025-01-30 14:12:36.094165557 +0000 UTC m=+10.178293834" Jan 30 14:12:36.982877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount198273585.mount: Deactivated successfully. 
Jan 30 14:12:37.339030 containerd[1721]: time="2025-01-30T14:12:37.338883467Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:37.341719 containerd[1721]: time="2025-01-30T14:12:37.341603508Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 30 14:12:37.344896 containerd[1721]: time="2025-01-30T14:12:37.344819349Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:37.348960 containerd[1721]: time="2025-01-30T14:12:37.348891470Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:37.350315 containerd[1721]: time="2025-01-30T14:12:37.349774711Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 3.218041361s" Jan 30 14:12:37.350315 containerd[1721]: time="2025-01-30T14:12:37.349815551Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 30 14:12:37.353410 containerd[1721]: time="2025-01-30T14:12:37.353364752Z" level=info msg="CreateContainer within sandbox \"ea8560424b023c35e9a23e39410918c58dba10e0e5d7b053825daa670a3304a5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 14:12:37.379521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3171561387.mount: Deactivated successfully. 
Jan 30 14:12:37.391557 containerd[1721]: time="2025-01-30T14:12:37.391407644Z" level=info msg="CreateContainer within sandbox \"ea8560424b023c35e9a23e39410918c58dba10e0e5d7b053825daa670a3304a5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9b6db4388218f9ed3ea6bc0ef87ddc337099f2bf5a9401cd44eabe342ae891bb\"" Jan 30 14:12:37.393687 containerd[1721]: time="2025-01-30T14:12:37.393456764Z" level=info msg="StartContainer for \"9b6db4388218f9ed3ea6bc0ef87ddc337099f2bf5a9401cd44eabe342ae891bb\"" Jan 30 14:12:37.418448 systemd[1]: Started cri-containerd-9b6db4388218f9ed3ea6bc0ef87ddc337099f2bf5a9401cd44eabe342ae891bb.scope - libcontainer container 9b6db4388218f9ed3ea6bc0ef87ddc337099f2bf5a9401cd44eabe342ae891bb. Jan 30 14:12:37.445182 containerd[1721]: time="2025-01-30T14:12:37.445033901Z" level=info msg="StartContainer for \"9b6db4388218f9ed3ea6bc0ef87ddc337099f2bf5a9401cd44eabe342ae891bb\" returns successfully" Jan 30 14:12:38.121877 kubelet[3190]: I0130 14:12:38.121802 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-k242d" podStartSLOduration=1.9018324720000002 podStartE2EDuration="5.121784913s" podCreationTimestamp="2025-01-30 14:12:33 +0000 UTC" firstStartedPulling="2025-01-30 14:12:34.13089227 +0000 UTC m=+8.215020547" lastFinishedPulling="2025-01-30 14:12:37.350844711 +0000 UTC m=+11.434972988" observedRunningTime="2025-01-30 14:12:38.121689553 +0000 UTC m=+12.205817830" watchObservedRunningTime="2025-01-30 14:12:38.121784913 +0000 UTC m=+12.205913190" Jan 30 14:12:41.961405 systemd[1]: Created slice kubepods-besteffort-podd0b0f808_3fd9_47fe_9697_0efbcb4e10ce.slice - libcontainer container kubepods-besteffort-podd0b0f808_3fd9_47fe_9697_0efbcb4e10ce.slice. 
Jan 30 14:12:41.962698 kubelet[3190]: W0130 14:12:41.962009 3190 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081.3.0-a-1247579205" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.0-a-1247579205' and this object Jan 30 14:12:41.962698 kubelet[3190]: E0130 14:12:41.962052 3190 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4081.3.0-a-1247579205\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.0-a-1247579205' and this object" logger="UnhandledError" Jan 30 14:12:41.962698 kubelet[3190]: W0130 14:12:41.962367 3190 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081.3.0-a-1247579205" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.0-a-1247579205' and this object Jan 30 14:12:41.962698 kubelet[3190]: E0130 14:12:41.962388 3190 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.0-a-1247579205\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.0-a-1247579205' and this object" logger="UnhandledError" Jan 30 14:12:42.011436 kubelet[3190]: I0130 14:12:42.011170 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm4c4\" (UniqueName: 
\"kubernetes.io/projected/d0b0f808-3fd9-47fe-9697-0efbcb4e10ce-kube-api-access-cm4c4\") pod \"calico-typha-6654d894db-k6trz\" (UID: \"d0b0f808-3fd9-47fe-9697-0efbcb4e10ce\") " pod="calico-system/calico-typha-6654d894db-k6trz" Jan 30 14:12:42.011436 kubelet[3190]: I0130 14:12:42.011236 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0b0f808-3fd9-47fe-9697-0efbcb4e10ce-tigera-ca-bundle\") pod \"calico-typha-6654d894db-k6trz\" (UID: \"d0b0f808-3fd9-47fe-9697-0efbcb4e10ce\") " pod="calico-system/calico-typha-6654d894db-k6trz" Jan 30 14:12:42.011436 kubelet[3190]: I0130 14:12:42.011257 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d0b0f808-3fd9-47fe-9697-0efbcb4e10ce-typha-certs\") pod \"calico-typha-6654d894db-k6trz\" (UID: \"d0b0f808-3fd9-47fe-9697-0efbcb4e10ce\") " pod="calico-system/calico-typha-6654d894db-k6trz" Jan 30 14:12:42.079537 systemd[1]: Created slice kubepods-besteffort-podc0346cc6_7e2c_4b70_b6cf_2835661b16e7.slice - libcontainer container kubepods-besteffort-podc0346cc6_7e2c_4b70_b6cf_2835661b16e7.slice. 
Jan 30 14:12:42.113382 kubelet[3190]: I0130 14:12:42.112457 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-cni-log-dir\") pod \"calico-node-kqpqj\" (UID: \"c0346cc6-7e2c-4b70-b6cf-2835661b16e7\") " pod="calico-system/calico-node-kqpqj" Jan 30 14:12:42.113382 kubelet[3190]: I0130 14:12:42.112521 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-cni-net-dir\") pod \"calico-node-kqpqj\" (UID: \"c0346cc6-7e2c-4b70-b6cf-2835661b16e7\") " pod="calico-system/calico-node-kqpqj" Jan 30 14:12:42.113382 kubelet[3190]: I0130 14:12:42.112539 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-var-run-calico\") pod \"calico-node-kqpqj\" (UID: \"c0346cc6-7e2c-4b70-b6cf-2835661b16e7\") " pod="calico-system/calico-node-kqpqj" Jan 30 14:12:42.113382 kubelet[3190]: I0130 14:12:42.112563 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-tigera-ca-bundle\") pod \"calico-node-kqpqj\" (UID: \"c0346cc6-7e2c-4b70-b6cf-2835661b16e7\") " pod="calico-system/calico-node-kqpqj" Jan 30 14:12:42.113382 kubelet[3190]: I0130 14:12:42.112591 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-node-certs\") pod \"calico-node-kqpqj\" (UID: \"c0346cc6-7e2c-4b70-b6cf-2835661b16e7\") " pod="calico-system/calico-node-kqpqj" Jan 30 14:12:42.113926 kubelet[3190]: I0130 14:12:42.112647 3190 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-lib-modules\") pod \"calico-node-kqpqj\" (UID: \"c0346cc6-7e2c-4b70-b6cf-2835661b16e7\") " pod="calico-system/calico-node-kqpqj" Jan 30 14:12:42.113926 kubelet[3190]: I0130 14:12:42.112672 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-xtables-lock\") pod \"calico-node-kqpqj\" (UID: \"c0346cc6-7e2c-4b70-b6cf-2835661b16e7\") " pod="calico-system/calico-node-kqpqj" Jan 30 14:12:42.113926 kubelet[3190]: I0130 14:12:42.112696 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-var-lib-calico\") pod \"calico-node-kqpqj\" (UID: \"c0346cc6-7e2c-4b70-b6cf-2835661b16e7\") " pod="calico-system/calico-node-kqpqj" Jan 30 14:12:42.113926 kubelet[3190]: I0130 14:12:42.112721 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-flexvol-driver-host\") pod \"calico-node-kqpqj\" (UID: \"c0346cc6-7e2c-4b70-b6cf-2835661b16e7\") " pod="calico-system/calico-node-kqpqj" Jan 30 14:12:42.113926 kubelet[3190]: I0130 14:12:42.112755 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-policysync\") pod \"calico-node-kqpqj\" (UID: \"c0346cc6-7e2c-4b70-b6cf-2835661b16e7\") " pod="calico-system/calico-node-kqpqj" Jan 30 14:12:42.114272 kubelet[3190]: I0130 14:12:42.112779 3190 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-cni-bin-dir\") pod \"calico-node-kqpqj\" (UID: \"c0346cc6-7e2c-4b70-b6cf-2835661b16e7\") " pod="calico-system/calico-node-kqpqj" Jan 30 14:12:42.114272 kubelet[3190]: I0130 14:12:42.112805 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf86h\" (UniqueName: \"kubernetes.io/projected/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-kube-api-access-qf86h\") pod \"calico-node-kqpqj\" (UID: \"c0346cc6-7e2c-4b70-b6cf-2835661b16e7\") " pod="calico-system/calico-node-kqpqj" Jan 30 14:12:42.216693 kubelet[3190]: E0130 14:12:42.216307 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.216693 kubelet[3190]: W0130 14:12:42.216341 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.216693 kubelet[3190]: E0130 14:12:42.216376 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.216979 kubelet[3190]: E0130 14:12:42.216942 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.217048 kubelet[3190]: W0130 14:12:42.217035 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.217112 kubelet[3190]: E0130 14:12:42.217099 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.217390 kubelet[3190]: E0130 14:12:42.217377 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.217622 kubelet[3190]: W0130 14:12:42.217470 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.217622 kubelet[3190]: E0130 14:12:42.217489 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.217902 kubelet[3190]: E0130 14:12:42.217887 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.218083 kubelet[3190]: W0130 14:12:42.217968 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.218083 kubelet[3190]: E0130 14:12:42.217984 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.219436 kubelet[3190]: E0130 14:12:42.218334 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.219666 kubelet[3190]: W0130 14:12:42.219532 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.219666 kubelet[3190]: E0130 14:12:42.219567 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.219843 kubelet[3190]: E0130 14:12:42.219819 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.219969 kubelet[3190]: W0130 14:12:42.219904 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.220306 kubelet[3190]: E0130 14:12:42.220160 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.221314 kubelet[3190]: E0130 14:12:42.220636 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.221314 kubelet[3190]: W0130 14:12:42.220655 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.224412 kubelet[3190]: E0130 14:12:42.222276 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.224412 kubelet[3190]: E0130 14:12:42.222576 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.224412 kubelet[3190]: W0130 14:12:42.222588 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.224412 kubelet[3190]: E0130 14:12:42.222601 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.227114 kubelet[3190]: E0130 14:12:42.226275 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.227114 kubelet[3190]: W0130 14:12:42.226300 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.227114 kubelet[3190]: E0130 14:12:42.226321 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.228214 kubelet[3190]: E0130 14:12:42.227739 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.228214 kubelet[3190]: W0130 14:12:42.227765 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.228214 kubelet[3190]: E0130 14:12:42.227791 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.229321 kubelet[3190]: E0130 14:12:42.228702 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.229321 kubelet[3190]: W0130 14:12:42.228727 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.229321 kubelet[3190]: E0130 14:12:42.229032 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.230060 kubelet[3190]: E0130 14:12:42.230009 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.230060 kubelet[3190]: W0130 14:12:42.230032 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.230331 kubelet[3190]: E0130 14:12:42.230199 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.231344 kubelet[3190]: E0130 14:12:42.231313 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.231344 kubelet[3190]: W0130 14:12:42.231338 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.231552 kubelet[3190]: E0130 14:12:42.231359 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.244931 kubelet[3190]: E0130 14:12:42.244900 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.245164 kubelet[3190]: W0130 14:12:42.244986 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.245164 kubelet[3190]: E0130 14:12:42.245013 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.250456 kubelet[3190]: E0130 14:12:42.250242 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc286" podUID="9434d060-dc38-470e-9a84-12438e405d36" Jan 30 14:12:42.303260 kubelet[3190]: E0130 14:12:42.303096 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.303260 kubelet[3190]: W0130 14:12:42.303121 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.303260 kubelet[3190]: E0130 14:12:42.303143 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.303642 kubelet[3190]: E0130 14:12:42.303512 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.303642 kubelet[3190]: W0130 14:12:42.303524 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.303642 kubelet[3190]: E0130 14:12:42.303536 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.303899 kubelet[3190]: E0130 14:12:42.303815 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.303899 kubelet[3190]: W0130 14:12:42.303827 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.303899 kubelet[3190]: E0130 14:12:42.303838 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.304317 kubelet[3190]: E0130 14:12:42.304160 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.304317 kubelet[3190]: W0130 14:12:42.304190 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.304317 kubelet[3190]: E0130 14:12:42.304203 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.304612 kubelet[3190]: E0130 14:12:42.304589 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.304718 kubelet[3190]: W0130 14:12:42.304662 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.304718 kubelet[3190]: E0130 14:12:42.304680 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.304998 kubelet[3190]: E0130 14:12:42.304934 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.304998 kubelet[3190]: W0130 14:12:42.304945 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.304998 kubelet[3190]: E0130 14:12:42.304963 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.305316 kubelet[3190]: E0130 14:12:42.305256 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.305316 kubelet[3190]: W0130 14:12:42.305268 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.305316 kubelet[3190]: E0130 14:12:42.305278 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.305650 kubelet[3190]: E0130 14:12:42.305569 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.305650 kubelet[3190]: W0130 14:12:42.305592 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.305650 kubelet[3190]: E0130 14:12:42.305602 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.305887 kubelet[3190]: E0130 14:12:42.305876 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.306024 kubelet[3190]: W0130 14:12:42.305954 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.306024 kubelet[3190]: E0130 14:12:42.305971 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.306360 kubelet[3190]: E0130 14:12:42.306261 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.306360 kubelet[3190]: W0130 14:12:42.306273 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.306360 kubelet[3190]: E0130 14:12:42.306283 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.306716 kubelet[3190]: E0130 14:12:42.306608 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.306716 kubelet[3190]: W0130 14:12:42.306620 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.306716 kubelet[3190]: E0130 14:12:42.306630 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.306871 kubelet[3190]: E0130 14:12:42.306860 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.307953 kubelet[3190]: W0130 14:12:42.306914 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.307953 kubelet[3190]: E0130 14:12:42.306928 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.308456 kubelet[3190]: E0130 14:12:42.308326 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.308456 kubelet[3190]: W0130 14:12:42.308343 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.308456 kubelet[3190]: E0130 14:12:42.308358 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.308644 kubelet[3190]: E0130 14:12:42.308633 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.308703 kubelet[3190]: W0130 14:12:42.308693 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.308760 kubelet[3190]: E0130 14:12:42.308750 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.309219 kubelet[3190]: E0130 14:12:42.309202 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.309219 kubelet[3190]: W0130 14:12:42.309302 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.309219 kubelet[3190]: E0130 14:12:42.309319 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.309919 kubelet[3190]: E0130 14:12:42.309803 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.309919 kubelet[3190]: W0130 14:12:42.309816 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.309919 kubelet[3190]: E0130 14:12:42.309848 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.310291 kubelet[3190]: E0130 14:12:42.310160 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.310291 kubelet[3190]: W0130 14:12:42.310173 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.310291 kubelet[3190]: E0130 14:12:42.310183 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.310749 kubelet[3190]: E0130 14:12:42.310527 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.310749 kubelet[3190]: W0130 14:12:42.310540 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.310749 kubelet[3190]: E0130 14:12:42.310550 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.311241 kubelet[3190]: E0130 14:12:42.310951 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.311241 kubelet[3190]: W0130 14:12:42.310964 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.311241 kubelet[3190]: E0130 14:12:42.310975 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.311514 kubelet[3190]: E0130 14:12:42.311420 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.311514 kubelet[3190]: W0130 14:12:42.311433 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.311514 kubelet[3190]: E0130 14:12:42.311445 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.315451 kubelet[3190]: E0130 14:12:42.315416 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.315803 kubelet[3190]: W0130 14:12:42.315601 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.315803 kubelet[3190]: E0130 14:12:42.315629 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.315803 kubelet[3190]: I0130 14:12:42.315668 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9434d060-dc38-470e-9a84-12438e405d36-registration-dir\") pod \"csi-node-driver-sc286\" (UID: \"9434d060-dc38-470e-9a84-12438e405d36\") " pod="calico-system/csi-node-driver-sc286" Jan 30 14:12:42.316268 kubelet[3190]: E0130 14:12:42.316248 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.316449 kubelet[3190]: W0130 14:12:42.316357 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.316449 kubelet[3190]: E0130 14:12:42.316388 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.316449 kubelet[3190]: I0130 14:12:42.316411 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9434d060-dc38-470e-9a84-12438e405d36-socket-dir\") pod \"csi-node-driver-sc286\" (UID: \"9434d060-dc38-470e-9a84-12438e405d36\") " pod="calico-system/csi-node-driver-sc286" Jan 30 14:12:42.316698 kubelet[3190]: E0130 14:12:42.316670 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.316698 kubelet[3190]: W0130 14:12:42.316690 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.316867 kubelet[3190]: E0130 14:12:42.316714 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.317609 kubelet[3190]: E0130 14:12:42.317567 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.317609 kubelet[3190]: W0130 14:12:42.317593 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.317609 kubelet[3190]: E0130 14:12:42.317616 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.317884 kubelet[3190]: E0130 14:12:42.317864 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.317884 kubelet[3190]: W0130 14:12:42.317880 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.317951 kubelet[3190]: E0130 14:12:42.317902 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.319622 kubelet[3190]: E0130 14:12:42.319380 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.319622 kubelet[3190]: W0130 14:12:42.319402 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.319622 kubelet[3190]: E0130 14:12:42.319492 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.319622 kubelet[3190]: I0130 14:12:42.319529 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9434d060-dc38-470e-9a84-12438e405d36-kubelet-dir\") pod \"csi-node-driver-sc286\" (UID: \"9434d060-dc38-470e-9a84-12438e405d36\") " pod="calico-system/csi-node-driver-sc286" Jan 30 14:12:42.319970 kubelet[3190]: E0130 14:12:42.319951 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.319970 kubelet[3190]: W0130 14:12:42.319966 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.320050 kubelet[3190]: E0130 14:12:42.319984 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.320202 kubelet[3190]: E0130 14:12:42.320180 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.320202 kubelet[3190]: W0130 14:12:42.320196 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.320384 kubelet[3190]: E0130 14:12:42.320212 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.320434 kubelet[3190]: E0130 14:12:42.320414 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.320434 kubelet[3190]: W0130 14:12:42.320423 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.320478 kubelet[3190]: E0130 14:12:42.320443 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.320663 kubelet[3190]: E0130 14:12:42.320646 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.320663 kubelet[3190]: W0130 14:12:42.320660 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.320727 kubelet[3190]: E0130 14:12:42.320680 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.320727 kubelet[3190]: I0130 14:12:42.320699 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9434d060-dc38-470e-9a84-12438e405d36-varrun\") pod \"csi-node-driver-sc286\" (UID: \"9434d060-dc38-470e-9a84-12438e405d36\") " pod="calico-system/csi-node-driver-sc286" Jan 30 14:12:42.322477 kubelet[3190]: E0130 14:12:42.322443 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.322477 kubelet[3190]: W0130 14:12:42.322470 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.322704 kubelet[3190]: E0130 14:12:42.322587 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.322704 kubelet[3190]: I0130 14:12:42.322621 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vzp5\" (UniqueName: \"kubernetes.io/projected/9434d060-dc38-470e-9a84-12438e405d36-kube-api-access-8vzp5\") pod \"csi-node-driver-sc286\" (UID: \"9434d060-dc38-470e-9a84-12438e405d36\") " pod="calico-system/csi-node-driver-sc286" Jan 30 14:12:42.322925 kubelet[3190]: E0130 14:12:42.322895 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.322925 kubelet[3190]: W0130 14:12:42.322919 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.323098 kubelet[3190]: E0130 14:12:42.323026 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.323131 kubelet[3190]: E0130 14:12:42.323115 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.323167 kubelet[3190]: W0130 14:12:42.323133 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.323345 kubelet[3190]: E0130 14:12:42.323213 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.323466 kubelet[3190]: E0130 14:12:42.323442 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.323466 kubelet[3190]: W0130 14:12:42.323459 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.323657 kubelet[3190]: E0130 14:12:42.323479 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.323913 kubelet[3190]: E0130 14:12:42.323884 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.323913 kubelet[3190]: W0130 14:12:42.323906 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.323990 kubelet[3190]: E0130 14:12:42.323929 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.324588 kubelet[3190]: E0130 14:12:42.324556 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.324588 kubelet[3190]: W0130 14:12:42.324578 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.324698 kubelet[3190]: E0130 14:12:42.324593 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.325577 kubelet[3190]: E0130 14:12:42.325541 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.325577 kubelet[3190]: W0130 14:12:42.325566 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.325577 kubelet[3190]: E0130 14:12:42.325579 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.325962 kubelet[3190]: E0130 14:12:42.325941 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.325962 kubelet[3190]: W0130 14:12:42.325956 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.326046 kubelet[3190]: E0130 14:12:42.325968 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.425891 kubelet[3190]: E0130 14:12:42.425826 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.425891 kubelet[3190]: W0130 14:12:42.425851 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.426387 kubelet[3190]: E0130 14:12:42.425971 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.426627 kubelet[3190]: E0130 14:12:42.426512 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.426627 kubelet[3190]: W0130 14:12:42.426542 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.426627 kubelet[3190]: E0130 14:12:42.426558 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.427045 kubelet[3190]: E0130 14:12:42.426986 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.427045 kubelet[3190]: W0130 14:12:42.426998 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.427315 kubelet[3190]: E0130 14:12:42.427014 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.427759 kubelet[3190]: E0130 14:12:42.427577 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.427759 kubelet[3190]: W0130 14:12:42.427591 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.427759 kubelet[3190]: E0130 14:12:42.427703 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.428567 kubelet[3190]: E0130 14:12:42.428300 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.428567 kubelet[3190]: W0130 14:12:42.428316 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.428567 kubelet[3190]: E0130 14:12:42.428375 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.428912 kubelet[3190]: E0130 14:12:42.428810 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.428912 kubelet[3190]: W0130 14:12:42.428824 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.429266 kubelet[3190]: E0130 14:12:42.429088 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.429437 kubelet[3190]: E0130 14:12:42.429424 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.429672 kubelet[3190]: W0130 14:12:42.429585 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.429672 kubelet[3190]: E0130 14:12:42.429608 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.430240 kubelet[3190]: E0130 14:12:42.430033 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.430240 kubelet[3190]: W0130 14:12:42.430047 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.430240 kubelet[3190]: E0130 14:12:42.430061 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.430682 kubelet[3190]: E0130 14:12:42.430489 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.430682 kubelet[3190]: W0130 14:12:42.430501 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.430904 kubelet[3190]: E0130 14:12:42.430852 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.431330 kubelet[3190]: E0130 14:12:42.431187 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.431330 kubelet[3190]: W0130 14:12:42.431200 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.431687 kubelet[3190]: E0130 14:12:42.431525 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.432374 kubelet[3190]: E0130 14:12:42.431870 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.432374 kubelet[3190]: W0130 14:12:42.431883 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.432374 kubelet[3190]: E0130 14:12:42.432255 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.433460 kubelet[3190]: E0130 14:12:42.433332 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.433460 kubelet[3190]: W0130 14:12:42.433349 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.433460 kubelet[3190]: E0130 14:12:42.433371 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.435174 kubelet[3190]: E0130 14:12:42.434082 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.435174 kubelet[3190]: W0130 14:12:42.434106 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.435174 kubelet[3190]: E0130 14:12:42.434382 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.435174 kubelet[3190]: W0130 14:12:42.434392 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.435174 kubelet[3190]: E0130 14:12:42.434556 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.435174 kubelet[3190]: W0130 14:12:42.434583 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.435174 kubelet[3190]: E0130 14:12:42.434596 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.435174 kubelet[3190]: E0130 14:12:42.434753 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.435174 kubelet[3190]: W0130 14:12:42.434761 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.435174 kubelet[3190]: E0130 14:12:42.434774 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.435174 kubelet[3190]: E0130 14:12:42.434971 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.435458 kubelet[3190]: W0130 14:12:42.435000 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.435458 kubelet[3190]: E0130 14:12:42.435027 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.436218 kubelet[3190]: E0130 14:12:42.435769 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.436218 kubelet[3190]: E0130 14:12:42.435989 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.436218 kubelet[3190]: W0130 14:12:42.435998 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.436218 kubelet[3190]: E0130 14:12:42.436009 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.436403 kubelet[3190]: E0130 14:12:42.436296 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.436403 kubelet[3190]: W0130 14:12:42.436307 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.436403 kubelet[3190]: E0130 14:12:42.436319 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.437047 kubelet[3190]: E0130 14:12:42.436556 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.437047 kubelet[3190]: W0130 14:12:42.436574 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.437047 kubelet[3190]: E0130 14:12:42.436585 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.437047 kubelet[3190]: E0130 14:12:42.436996 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.437501 kubelet[3190]: E0130 14:12:42.437475 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.437611 kubelet[3190]: W0130 14:12:42.437494 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.437649 kubelet[3190]: E0130 14:12:42.437616 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:42.438087 kubelet[3190]: E0130 14:12:42.437930 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.438087 kubelet[3190]: W0130 14:12:42.437948 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.438087 kubelet[3190]: E0130 14:12:42.437984 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:42.438438 kubelet[3190]: E0130 14:12:42.438410 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:42.438667 kubelet[3190]: W0130 14:12:42.438567 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:42.438667 kubelet[3190]: E0130 14:12:42.438613 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:43.114120 kubelet[3190]: E0130 14:12:43.113877 3190 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:12:43.114120 kubelet[3190]: E0130 14:12:43.113980 3190 secret.go:188] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jan 30 14:12:43.114323 kubelet[3190]: E0130 14:12:43.114065 3190 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0b0f808-3fd9-47fe-9697-0efbcb4e10ce-typha-certs podName:d0b0f808-3fd9-47fe-9697-0efbcb4e10ce nodeName:}" failed. No retries permitted until 2025-01-30 14:12:43.614040678 +0000 UTC m=+17.698168955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/d0b0f808-3fd9-47fe-9697-0efbcb4e10ce-typha-certs") pod "calico-typha-6654d894db-k6trz" (UID: "d0b0f808-3fd9-47fe-9697-0efbcb4e10ce") : failed to sync secret cache: timed out waiting for the condition Jan 30 14:12:43.114386 kubelet[3190]: E0130 14:12:43.114342 3190 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d0b0f808-3fd9-47fe-9697-0efbcb4e10ce-tigera-ca-bundle podName:d0b0f808-3fd9-47fe-9697-0efbcb4e10ce nodeName:}" failed. No retries permitted until 2025-01-30 14:12:43.614327918 +0000 UTC m=+17.698456155 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/d0b0f808-3fd9-47fe-9697-0efbcb4e10ce-tigera-ca-bundle") pod "calico-typha-6654d894db-k6trz" (UID: "d0b0f808-3fd9-47fe-9697-0efbcb4e10ce") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:12:43.148911 kubelet[3190]: E0130 14:12:43.148870 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.148911 kubelet[3190]: W0130 14:12:43.148898 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.148911 kubelet[3190]: E0130 14:12:43.148921 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:43.149196 kubelet[3190]: E0130 14:12:43.149098 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.149196 kubelet[3190]: W0130 14:12:43.149106 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.149196 kubelet[3190]: E0130 14:12:43.149115 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:43.149342 kubelet[3190]: E0130 14:12:43.149300 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.149342 kubelet[3190]: W0130 14:12:43.149309 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.149342 kubelet[3190]: E0130 14:12:43.149319 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:43.215570 kubelet[3190]: E0130 14:12:43.215255 3190 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:12:43.215570 kubelet[3190]: E0130 14:12:43.215341 3190 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-tigera-ca-bundle podName:c0346cc6-7e2c-4b70-b6cf-2835661b16e7 nodeName:}" failed. No retries permitted until 2025-01-30 14:12:43.715320902 +0000 UTC m=+17.799449179 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/c0346cc6-7e2c-4b70-b6cf-2835661b16e7-tigera-ca-bundle") pod "calico-node-kqpqj" (UID: "c0346cc6-7e2c-4b70-b6cf-2835661b16e7") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:12:43.250615 kubelet[3190]: E0130 14:12:43.250575 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.250901 kubelet[3190]: W0130 14:12:43.250780 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.250901 kubelet[3190]: E0130 14:12:43.250809 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:43.251248 kubelet[3190]: E0130 14:12:43.251135 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.251248 kubelet[3190]: W0130 14:12:43.251154 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.251248 kubelet[3190]: E0130 14:12:43.251166 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:43.251578 kubelet[3190]: E0130 14:12:43.251506 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.251578 kubelet[3190]: W0130 14:12:43.251524 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.251578 kubelet[3190]: E0130 14:12:43.251537 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:43.352596 kubelet[3190]: E0130 14:12:43.352470 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.352596 kubelet[3190]: W0130 14:12:43.352498 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.352596 kubelet[3190]: E0130 14:12:43.352517 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:43.352822 kubelet[3190]: E0130 14:12:43.352707 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.352822 kubelet[3190]: W0130 14:12:43.352717 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.352822 kubelet[3190]: E0130 14:12:43.352727 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:43.352921 kubelet[3190]: E0130 14:12:43.352902 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.352947 kubelet[3190]: W0130 14:12:43.352910 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.352947 kubelet[3190]: E0130 14:12:43.352931 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:43.454129 kubelet[3190]: E0130 14:12:43.453971 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.454129 kubelet[3190]: W0130 14:12:43.454011 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.454129 kubelet[3190]: E0130 14:12:43.454031 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:43.455009 kubelet[3190]: E0130 14:12:43.454845 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.455009 kubelet[3190]: W0130 14:12:43.454869 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.455009 kubelet[3190]: E0130 14:12:43.454885 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:43.455248 kubelet[3190]: E0130 14:12:43.455117 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.455248 kubelet[3190]: W0130 14:12:43.455127 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.455248 kubelet[3190]: E0130 14:12:43.455140 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:43.555995 kubelet[3190]: E0130 14:12:43.555844 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.555995 kubelet[3190]: W0130 14:12:43.555870 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.555995 kubelet[3190]: E0130 14:12:43.555892 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:43.556358 kubelet[3190]: E0130 14:12:43.556275 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.556358 kubelet[3190]: W0130 14:12:43.556288 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.556358 kubelet[3190]: E0130 14:12:43.556299 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:43.556734 kubelet[3190]: E0130 14:12:43.556661 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.556734 kubelet[3190]: W0130 14:12:43.556674 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.556734 kubelet[3190]: E0130 14:12:43.556686 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:43.658177 kubelet[3190]: E0130 14:12:43.658064 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.658177 kubelet[3190]: W0130 14:12:43.658094 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.658177 kubelet[3190]: E0130 14:12:43.658124 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:43.658732 kubelet[3190]: E0130 14:12:43.658455 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.658732 kubelet[3190]: W0130 14:12:43.658469 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.658732 kubelet[3190]: E0130 14:12:43.658496 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:43.658732 kubelet[3190]: E0130 14:12:43.658720 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.659044 kubelet[3190]: W0130 14:12:43.658737 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.659044 kubelet[3190]: E0130 14:12:43.658763 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:43.659395 kubelet[3190]: E0130 14:12:43.659357 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.659607 kubelet[3190]: W0130 14:12:43.659532 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.659607 kubelet[3190]: E0130 14:12:43.659580 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:43.659938 kubelet[3190]: E0130 14:12:43.659909 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.659938 kubelet[3190]: W0130 14:12:43.659932 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.660108 kubelet[3190]: E0130 14:12:43.659960 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:43.660204 kubelet[3190]: E0130 14:12:43.660186 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.660204 kubelet[3190]: W0130 14:12:43.660198 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.660463 kubelet[3190]: E0130 14:12:43.660251 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:43.765203 kubelet[3190]: E0130 14:12:43.765164 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:43.765203 kubelet[3190]: W0130 14:12:43.765188 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:43.765393 kubelet[3190]: E0130 14:12:43.765210 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:43.766813 containerd[1721]: time="2025-01-30T14:12:43.766768709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6654d894db-k6trz,Uid:d0b0f808-3fd9-47fe-9697-0efbcb4e10ce,Namespace:calico-system,Attempt:0,}" Jan 30 14:12:43.810216 containerd[1721]: time="2025-01-30T14:12:43.809982919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:12:43.810216 containerd[1721]: time="2025-01-30T14:12:43.810055119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:12:43.810216 containerd[1721]: time="2025-01-30T14:12:43.810075759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:43.810216 containerd[1721]: time="2025-01-30T14:12:43.810181319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:43.838423 systemd[1]: Started cri-containerd-116a455a02671405df4c7e8849071b43eaa393bf87b5c137ca6faa78fffab205.scope - libcontainer container 116a455a02671405df4c7e8849071b43eaa393bf87b5c137ca6faa78fffab205. Jan 30 14:12:43.868048 containerd[1721]: time="2025-01-30T14:12:43.867975092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6654d894db-k6trz,Uid:d0b0f808-3fd9-47fe-9697-0efbcb4e10ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"116a455a02671405df4c7e8849071b43eaa393bf87b5c137ca6faa78fffab205\"" Jan 30 14:12:43.871847 containerd[1721]: time="2025-01-30T14:12:43.871576493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 14:12:43.884120 containerd[1721]: time="2025-01-30T14:12:43.884075056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kqpqj,Uid:c0346cc6-7e2c-4b70-b6cf-2835661b16e7,Namespace:calico-system,Attempt:0,}" Jan 30 14:12:43.935470 containerd[1721]: time="2025-01-30T14:12:43.935216308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:12:43.935470 containerd[1721]: time="2025-01-30T14:12:43.935334748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:12:43.935470 containerd[1721]: time="2025-01-30T14:12:43.935359148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:43.935976 containerd[1721]: time="2025-01-30T14:12:43.935520948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:43.955474 systemd[1]: Started cri-containerd-2fb6748a00985f3772ee72c25fab3939d8777bbbd6f973870c2af55749a5f8a9.scope - libcontainer container 2fb6748a00985f3772ee72c25fab3939d8777bbbd6f973870c2af55749a5f8a9. Jan 30 14:12:43.980167 containerd[1721]: time="2025-01-30T14:12:43.980008878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kqpqj,Uid:c0346cc6-7e2c-4b70-b6cf-2835661b16e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"2fb6748a00985f3772ee72c25fab3939d8777bbbd6f973870c2af55749a5f8a9\"" Jan 30 14:12:44.042767 kubelet[3190]: E0130 14:12:44.042447 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc286" podUID="9434d060-dc38-470e-9a84-12438e405d36" Jan 30 14:12:45.036948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322275954.mount: Deactivated successfully. 
Jan 30 14:12:45.616086 containerd[1721]: time="2025-01-30T14:12:45.616032856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:45.618813 containerd[1721]: time="2025-01-30T14:12:45.618736417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 30 14:12:45.622433 containerd[1721]: time="2025-01-30T14:12:45.621967218Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:45.628650 containerd[1721]: time="2025-01-30T14:12:45.628591979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:45.629645 containerd[1721]: time="2025-01-30T14:12:45.629602779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.757979846s" Jan 30 14:12:45.629645 containerd[1721]: time="2025-01-30T14:12:45.629640979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 30 14:12:45.635494 containerd[1721]: time="2025-01-30T14:12:45.635451221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 14:12:45.647648 containerd[1721]: time="2025-01-30T14:12:45.647591784Z" level=info msg="CreateContainer within sandbox \"116a455a02671405df4c7e8849071b43eaa393bf87b5c137ca6faa78fffab205\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 14:12:45.694799 containerd[1721]: time="2025-01-30T14:12:45.694751594Z" level=info msg="CreateContainer within sandbox \"116a455a02671405df4c7e8849071b43eaa393bf87b5c137ca6faa78fffab205\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"773c6543981ea1f050de7925c3a730dd4b30fd5cbfdca5e8c96c55c43807908a\"" Jan 30 14:12:45.695689 containerd[1721]: time="2025-01-30T14:12:45.695487275Z" level=info msg="StartContainer for \"773c6543981ea1f050de7925c3a730dd4b30fd5cbfdca5e8c96c55c43807908a\"" Jan 30 14:12:45.724856 systemd[1]: Started cri-containerd-773c6543981ea1f050de7925c3a730dd4b30fd5cbfdca5e8c96c55c43807908a.scope - libcontainer container 773c6543981ea1f050de7925c3a730dd4b30fd5cbfdca5e8c96c55c43807908a. Jan 30 14:12:45.767204 containerd[1721]: time="2025-01-30T14:12:45.766997371Z" level=info msg="StartContainer for \"773c6543981ea1f050de7925c3a730dd4b30fd5cbfdca5e8c96c55c43807908a\" returns successfully" Jan 30 14:12:46.041926 kubelet[3190]: E0130 14:12:46.041566 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc286" podUID="9434d060-dc38-470e-9a84-12438e405d36" Jan 30 14:12:46.144378 kubelet[3190]: I0130 14:12:46.143910 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6654d894db-k6trz" podStartSLOduration=3.382992571 podStartE2EDuration="5.143897738s" podCreationTimestamp="2025-01-30 14:12:41 +0000 UTC" firstStartedPulling="2025-01-30 14:12:43.870005173 +0000 UTC m=+17.954133410" lastFinishedPulling="2025-01-30 14:12:45.6309103 +0000 UTC m=+19.715038577" observedRunningTime="2025-01-30 14:12:46.143622218 +0000 UTC m=+20.227750495" watchObservedRunningTime="2025-01-30 14:12:46.143897738 +0000 UTC m=+20.228026015"
Error: unexpected end of JSON input" Jan 30 14:12:46.185796 kubelet[3190]: E0130 14:12:46.185776 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:46.185796 kubelet[3190]: W0130 14:12:46.185789 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:46.185847 kubelet[3190]: E0130 14:12:46.185799 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:46.185952 kubelet[3190]: E0130 14:12:46.185927 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:46.185952 kubelet[3190]: W0130 14:12:46.185933 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:46.185952 kubelet[3190]: E0130 14:12:46.185941 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:46.186110 kubelet[3190]: E0130 14:12:46.186086 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:46.186110 kubelet[3190]: W0130 14:12:46.186100 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:46.186161 kubelet[3190]: E0130 14:12:46.186125 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:46.186637 kubelet[3190]: E0130 14:12:46.186610 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:46.186637 kubelet[3190]: W0130 14:12:46.186631 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:46.186714 kubelet[3190]: E0130 14:12:46.186643 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.092272 containerd[1721]: time="2025-01-30T14:12:47.091685837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:47.093931 containerd[1721]: time="2025-01-30T14:12:47.093889998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 30 14:12:47.097922 containerd[1721]: time="2025-01-30T14:12:47.097866279Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:47.106782 containerd[1721]: time="2025-01-30T14:12:47.106466641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:47.107292 containerd[1721]: time="2025-01-30T14:12:47.107218161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.47130038s" Jan 30 14:12:47.107411 containerd[1721]: time="2025-01-30T14:12:47.107371081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 30 14:12:47.109916 containerd[1721]: time="2025-01-30T14:12:47.109851641Z" level=info msg="CreateContainer within sandbox \"2fb6748a00985f3772ee72c25fab3939d8777bbbd6f973870c2af55749a5f8a9\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 14:12:47.131547 kubelet[3190]: I0130 14:12:47.131513 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:12:47.149833 containerd[1721]: time="2025-01-30T14:12:47.149702051Z" level=info msg="CreateContainer within sandbox \"2fb6748a00985f3772ee72c25fab3939d8777bbbd6f973870c2af55749a5f8a9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5fed2899bcfbef61fc29f46edddf966709f7068fa4b6633c35525df709dae403\"" Jan 30 14:12:47.151627 containerd[1721]: time="2025-01-30T14:12:47.150363571Z" level=info msg="StartContainer for \"5fed2899bcfbef61fc29f46edddf966709f7068fa4b6633c35525df709dae403\"" Jan 30 14:12:47.152041 kubelet[3190]: E0130 14:12:47.152014 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.152041 kubelet[3190]: W0130 14:12:47.152037 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.152205 kubelet[3190]: E0130 14:12:47.152061 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.152205 kubelet[3190]: E0130 14:12:47.152202 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.152329 kubelet[3190]: W0130 14:12:47.152209 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.152329 kubelet[3190]: E0130 14:12:47.152218 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.152420 kubelet[3190]: E0130 14:12:47.152388 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.152420 kubelet[3190]: W0130 14:12:47.152396 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.152420 kubelet[3190]: E0130 14:12:47.152405 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.152550 kubelet[3190]: E0130 14:12:47.152534 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.152550 kubelet[3190]: W0130 14:12:47.152541 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.152631 kubelet[3190]: E0130 14:12:47.152549 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.152794 kubelet[3190]: E0130 14:12:47.152765 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.152794 kubelet[3190]: W0130 14:12:47.152775 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.152794 kubelet[3190]: E0130 14:12:47.152784 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.153116 kubelet[3190]: E0130 14:12:47.152939 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.153116 kubelet[3190]: W0130 14:12:47.152947 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.153116 kubelet[3190]: E0130 14:12:47.152956 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.153116 kubelet[3190]: E0130 14:12:47.153075 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.153116 kubelet[3190]: W0130 14:12:47.153082 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.153116 kubelet[3190]: E0130 14:12:47.153089 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.153947 kubelet[3190]: E0130 14:12:47.153211 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.153947 kubelet[3190]: W0130 14:12:47.153218 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.153947 kubelet[3190]: E0130 14:12:47.153241 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.153947 kubelet[3190]: E0130 14:12:47.153382 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.153947 kubelet[3190]: W0130 14:12:47.153390 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.153947 kubelet[3190]: E0130 14:12:47.153397 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.153947 kubelet[3190]: E0130 14:12:47.153521 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.153947 kubelet[3190]: W0130 14:12:47.153528 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.153947 kubelet[3190]: E0130 14:12:47.153535 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.153947 kubelet[3190]: E0130 14:12:47.153654 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.155108 kubelet[3190]: W0130 14:12:47.153661 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.155108 kubelet[3190]: E0130 14:12:47.153668 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.155108 kubelet[3190]: E0130 14:12:47.153815 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.155108 kubelet[3190]: W0130 14:12:47.153824 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.155108 kubelet[3190]: E0130 14:12:47.153834 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.155108 kubelet[3190]: E0130 14:12:47.153986 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.155108 kubelet[3190]: W0130 14:12:47.153993 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.155108 kubelet[3190]: E0130 14:12:47.154001 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.155108 kubelet[3190]: E0130 14:12:47.154129 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.155108 kubelet[3190]: W0130 14:12:47.154136 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.155372 kubelet[3190]: E0130 14:12:47.154143 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.155372 kubelet[3190]: E0130 14:12:47.154291 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.155372 kubelet[3190]: W0130 14:12:47.154299 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.155372 kubelet[3190]: E0130 14:12:47.154307 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.186445 systemd[1]: Started cri-containerd-5fed2899bcfbef61fc29f46edddf966709f7068fa4b6633c35525df709dae403.scope - libcontainer container 5fed2899bcfbef61fc29f46edddf966709f7068fa4b6633c35525df709dae403. 
Jan 30 14:12:47.188723 kubelet[3190]: E0130 14:12:47.188692 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.188723 kubelet[3190]: W0130 14:12:47.188715 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.188879 kubelet[3190]: E0130 14:12:47.188735 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.188965 kubelet[3190]: E0130 14:12:47.188937 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.188965 kubelet[3190]: W0130 14:12:47.188952 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.189101 kubelet[3190]: E0130 14:12:47.188969 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.189243 kubelet[3190]: E0130 14:12:47.189204 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.189243 kubelet[3190]: W0130 14:12:47.189218 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.190017 kubelet[3190]: E0130 14:12:47.189252 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.190017 kubelet[3190]: E0130 14:12:47.189459 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.190017 kubelet[3190]: W0130 14:12:47.189468 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.190017 kubelet[3190]: E0130 14:12:47.189477 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.190017 kubelet[3190]: E0130 14:12:47.189651 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.190017 kubelet[3190]: W0130 14:12:47.189659 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.190017 kubelet[3190]: E0130 14:12:47.189668 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.190017 kubelet[3190]: E0130 14:12:47.189783 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.190017 kubelet[3190]: W0130 14:12:47.189790 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.190017 kubelet[3190]: E0130 14:12:47.189797 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.191546 kubelet[3190]: E0130 14:12:47.190072 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.191546 kubelet[3190]: W0130 14:12:47.190083 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.191546 kubelet[3190]: E0130 14:12:47.190147 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.191546 kubelet[3190]: E0130 14:12:47.190522 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.191546 kubelet[3190]: W0130 14:12:47.190532 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.191546 kubelet[3190]: E0130 14:12:47.190736 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.191546 kubelet[3190]: W0130 14:12:47.190747 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.191546 kubelet[3190]: E0130 14:12:47.190885 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.191546 kubelet[3190]: W0130 14:12:47.190891 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.191546 kubelet[3190]: E0130 14:12:47.190901 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.191754 kubelet[3190]: E0130 14:12:47.190927 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.191754 kubelet[3190]: E0130 14:12:47.191118 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.191754 kubelet[3190]: W0130 14:12:47.191125 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.191754 kubelet[3190]: E0130 14:12:47.191135 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.191754 kubelet[3190]: E0130 14:12:47.191293 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.191754 kubelet[3190]: W0130 14:12:47.191303 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.191754 kubelet[3190]: E0130 14:12:47.191312 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.191754 kubelet[3190]: E0130 14:12:47.191469 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.191754 kubelet[3190]: W0130 14:12:47.191476 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.191754 kubelet[3190]: E0130 14:12:47.191484 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.191957 kubelet[3190]: E0130 14:12:47.191800 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.191957 kubelet[3190]: W0130 14:12:47.191809 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.191957 kubelet[3190]: E0130 14:12:47.191818 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.191957 kubelet[3190]: E0130 14:12:47.191859 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.192040 kubelet[3190]: E0130 14:12:47.192032 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.192062 kubelet[3190]: W0130 14:12:47.192041 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.192062 kubelet[3190]: E0130 14:12:47.192052 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.192411 kubelet[3190]: E0130 14:12:47.192349 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.192411 kubelet[3190]: W0130 14:12:47.192367 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.192411 kubelet[3190]: E0130 14:12:47.192378 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:12:47.192737 kubelet[3190]: E0130 14:12:47.192597 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.192737 kubelet[3190]: W0130 14:12:47.192606 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.192737 kubelet[3190]: E0130 14:12:47.192616 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.193113 kubelet[3190]: E0130 14:12:47.193093 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:12:47.193113 kubelet[3190]: W0130 14:12:47.193111 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:12:47.193194 kubelet[3190]: E0130 14:12:47.193135 3190 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:12:47.220574 containerd[1721]: time="2025-01-30T14:12:47.220449747Z" level=info msg="StartContainer for \"5fed2899bcfbef61fc29f46edddf966709f7068fa4b6633c35525df709dae403\" returns successfully" Jan 30 14:12:47.232108 systemd[1]: cri-containerd-5fed2899bcfbef61fc29f46edddf966709f7068fa4b6633c35525df709dae403.scope: Deactivated successfully. Jan 30 14:12:47.258094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fed2899bcfbef61fc29f46edddf966709f7068fa4b6633c35525df709dae403-rootfs.mount: Deactivated successfully. 
Jan 30 14:12:48.042247 kubelet[3190]: E0130 14:12:48.041870 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc286" podUID="9434d060-dc38-470e-9a84-12438e405d36" Jan 30 14:12:48.171302 containerd[1721]: time="2025-01-30T14:12:48.171244287Z" level=info msg="shim disconnected" id=5fed2899bcfbef61fc29f46edddf966709f7068fa4b6633c35525df709dae403 namespace=k8s.io Jan 30 14:12:48.171855 containerd[1721]: time="2025-01-30T14:12:48.171694647Z" level=warning msg="cleaning up after shim disconnected" id=5fed2899bcfbef61fc29f46edddf966709f7068fa4b6633c35525df709dae403 namespace=k8s.io Jan 30 14:12:48.171855 containerd[1721]: time="2025-01-30T14:12:48.171718487Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:12:49.140518 containerd[1721]: time="2025-01-30T14:12:49.140422550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 14:12:50.041855 kubelet[3190]: E0130 14:12:50.041413 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc286" podUID="9434d060-dc38-470e-9a84-12438e405d36" Jan 30 14:12:51.981916 containerd[1721]: time="2025-01-30T14:12:51.981694846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:51.983838 containerd[1721]: time="2025-01-30T14:12:51.983768926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 30 14:12:51.987341 containerd[1721]: time="2025-01-30T14:12:51.987285367Z" level=info msg="ImageCreate event 
name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:51.991839 containerd[1721]: time="2025-01-30T14:12:51.991790568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:51.992807 containerd[1721]: time="2025-01-30T14:12:51.992678849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.852086819s" Jan 30 14:12:51.992807 containerd[1721]: time="2025-01-30T14:12:51.992713089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 30 14:12:51.996385 containerd[1721]: time="2025-01-30T14:12:51.996334090Z" level=info msg="CreateContainer within sandbox \"2fb6748a00985f3772ee72c25fab3939d8777bbbd6f973870c2af55749a5f8a9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 14:12:52.030636 containerd[1721]: time="2025-01-30T14:12:52.030543060Z" level=info msg="CreateContainer within sandbox \"2fb6748a00985f3772ee72c25fab3939d8777bbbd6f973870c2af55749a5f8a9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"862a9d62fd3f5d63bb7bbcc0486419da42568bce71f4eda953c947e870d5c5fd\"" Jan 30 14:12:52.031444 containerd[1721]: time="2025-01-30T14:12:52.031145540Z" level=info msg="StartContainer for \"862a9d62fd3f5d63bb7bbcc0486419da42568bce71f4eda953c947e870d5c5fd\"" Jan 30 14:12:52.041258 kubelet[3190]: E0130 14:12:52.041021 3190 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc286" podUID="9434d060-dc38-470e-9a84-12438e405d36" Jan 30 14:12:52.068430 systemd[1]: Started cri-containerd-862a9d62fd3f5d63bb7bbcc0486419da42568bce71f4eda953c947e870d5c5fd.scope - libcontainer container 862a9d62fd3f5d63bb7bbcc0486419da42568bce71f4eda953c947e870d5c5fd. Jan 30 14:12:52.099038 containerd[1721]: time="2025-01-30T14:12:52.098898439Z" level=info msg="StartContainer for \"862a9d62fd3f5d63bb7bbcc0486419da42568bce71f4eda953c947e870d5c5fd\" returns successfully" Jan 30 14:12:53.166703 systemd[1]: cri-containerd-862a9d62fd3f5d63bb7bbcc0486419da42568bce71f4eda953c947e870d5c5fd.scope: Deactivated successfully. Jan 30 14:12:53.184649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-862a9d62fd3f5d63bb7bbcc0486419da42568bce71f4eda953c947e870d5c5fd-rootfs.mount: Deactivated successfully. 
Jan 30 14:12:53.196995 kubelet[3190]: I0130 14:12:53.196786 3190 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 14:12:53.504955 kubelet[3190]: W0130 14:12:53.256507 3190 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-a-1247579205" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.0-a-1247579205' and this object Jan 30 14:12:53.504955 kubelet[3190]: E0130 14:12:53.256543 3190 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.0-a-1247579205\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.0-a-1247579205' and this object" logger="UnhandledError" Jan 30 14:12:53.504955 kubelet[3190]: W0130 14:12:53.256611 3190 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.0-a-1247579205" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.0-a-1247579205' and this object Jan 30 14:12:53.504955 kubelet[3190]: E0130 14:12:53.256628 3190 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.0-a-1247579205\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.0-a-1247579205' and this object" logger="UnhandledError" Jan 30 
14:12:53.250028 systemd[1]: Created slice kubepods-burstable-pod8f59b1d6_1b79_481b_b739_8d25c393b80d.slice - libcontainer container kubepods-burstable-pod8f59b1d6_1b79_481b_b739_8d25c393b80d.slice. Jan 30 14:12:53.505273 kubelet[3190]: I0130 14:12:53.332925 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b6c2c7a-4ead-45d0-9c40-04e67f8b365b-tigera-ca-bundle\") pod \"calico-kube-controllers-6d47d89cdf-pw5ns\" (UID: \"3b6c2c7a-4ead-45d0-9c40-04e67f8b365b\") " pod="calico-system/calico-kube-controllers-6d47d89cdf-pw5ns" Jan 30 14:12:53.505273 kubelet[3190]: I0130 14:12:53.332966 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftbs5\" (UniqueName: \"kubernetes.io/projected/7b1fe55d-d807-4720-87c5-71fc636c7ec3-kube-api-access-ftbs5\") pod \"coredns-6f6b679f8f-wzldc\" (UID: \"7b1fe55d-d807-4720-87c5-71fc636c7ec3\") " pod="kube-system/coredns-6f6b679f8f-wzldc" Jan 30 14:12:53.505273 kubelet[3190]: I0130 14:12:53.332989 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e-calico-apiserver-certs\") pod \"calico-apiserver-598ff764fd-6p78s\" (UID: \"a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e\") " pod="calico-apiserver/calico-apiserver-598ff764fd-6p78s" Jan 30 14:12:53.505273 kubelet[3190]: I0130 14:12:53.333010 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqlg\" (UniqueName: \"kubernetes.io/projected/3b6c2c7a-4ead-45d0-9c40-04e67f8b365b-kube-api-access-nqqlg\") pod \"calico-kube-controllers-6d47d89cdf-pw5ns\" (UID: \"3b6c2c7a-4ead-45d0-9c40-04e67f8b365b\") " pod="calico-system/calico-kube-controllers-6d47d89cdf-pw5ns" Jan 30 14:12:53.505273 kubelet[3190]: I0130 14:12:53.333029 3190 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6b4f3c05-51a8-4482-9097-9558572b9148-calico-apiserver-certs\") pod \"calico-apiserver-598ff764fd-v4h8w\" (UID: \"6b4f3c05-51a8-4482-9097-9558572b9148\") " pod="calico-apiserver/calico-apiserver-598ff764fd-v4h8w" Jan 30 14:12:53.264360 systemd[1]: Created slice kubepods-burstable-pod7b1fe55d_d807_4720_87c5_71fc636c7ec3.slice - libcontainer container kubepods-burstable-pod7b1fe55d_d807_4720_87c5_71fc636c7ec3.slice. Jan 30 14:12:53.509238 kubelet[3190]: I0130 14:12:53.333045 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b1fe55d-d807-4720-87c5-71fc636c7ec3-config-volume\") pod \"coredns-6f6b679f8f-wzldc\" (UID: \"7b1fe55d-d807-4720-87c5-71fc636c7ec3\") " pod="kube-system/coredns-6f6b679f8f-wzldc" Jan 30 14:12:53.509238 kubelet[3190]: I0130 14:12:53.333062 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2gsh\" (UniqueName: \"kubernetes.io/projected/a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e-kube-api-access-f2gsh\") pod \"calico-apiserver-598ff764fd-6p78s\" (UID: \"a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e\") " pod="calico-apiserver/calico-apiserver-598ff764fd-6p78s" Jan 30 14:12:53.509238 kubelet[3190]: I0130 14:12:53.333078 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbhsq\" (UniqueName: \"kubernetes.io/projected/6b4f3c05-51a8-4482-9097-9558572b9148-kube-api-access-mbhsq\") pod \"calico-apiserver-598ff764fd-v4h8w\" (UID: \"6b4f3c05-51a8-4482-9097-9558572b9148\") " pod="calico-apiserver/calico-apiserver-598ff764fd-v4h8w" Jan 30 14:12:53.509238 kubelet[3190]: I0130 14:12:53.333135 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f59b1d6-1b79-481b-b739-8d25c393b80d-config-volume\") pod \"coredns-6f6b679f8f-t4568\" (UID: \"8f59b1d6-1b79-481b-b739-8d25c393b80d\") " pod="kube-system/coredns-6f6b679f8f-t4568" Jan 30 14:12:53.509238 kubelet[3190]: I0130 14:12:53.333164 3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clppb\" (UniqueName: \"kubernetes.io/projected/8f59b1d6-1b79-481b-b739-8d25c393b80d-kube-api-access-clppb\") pod \"coredns-6f6b679f8f-t4568\" (UID: \"8f59b1d6-1b79-481b-b739-8d25c393b80d\") " pod="kube-system/coredns-6f6b679f8f-t4568" Jan 30 14:12:53.281584 systemd[1]: Created slice kubepods-besteffort-pod3b6c2c7a_4ead_45d0_9c40_04e67f8b365b.slice - libcontainer container kubepods-besteffort-pod3b6c2c7a_4ead_45d0_9c40_04e67f8b365b.slice. Jan 30 14:12:53.288980 systemd[1]: Created slice kubepods-besteffort-poda0a55dbb_d35d_47fc_9896_5a8a6fe5ec9e.slice - libcontainer container kubepods-besteffort-poda0a55dbb_d35d_47fc_9896_5a8a6fe5ec9e.slice. Jan 30 14:12:53.294341 systemd[1]: Created slice kubepods-besteffort-pod6b4f3c05_51a8_4482_9097_9558572b9148.slice - libcontainer container kubepods-besteffort-pod6b4f3c05_51a8_4482_9097_9558572b9148.slice. 
Jan 30 14:12:53.807063 containerd[1721]: time="2025-01-30T14:12:53.806934453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4568,Uid:8f59b1d6-1b79-481b-b739-8d25c393b80d,Namespace:kube-system,Attempt:0,}" Jan 30 14:12:53.809506 containerd[1721]: time="2025-01-30T14:12:53.809433174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d47d89cdf-pw5ns,Uid:3b6c2c7a-4ead-45d0-9c40-04e67f8b365b,Namespace:calico-system,Attempt:0,}" Jan 30 14:12:53.817879 containerd[1721]: time="2025-01-30T14:12:53.817825657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzldc,Uid:7b1fe55d-d807-4720-87c5-71fc636c7ec3,Namespace:kube-system,Attempt:0,}" Jan 30 14:12:54.048301 systemd[1]: Created slice kubepods-besteffort-pod9434d060_dc38_470e_9a84_12438e405d36.slice - libcontainer container kubepods-besteffort-pod9434d060_dc38_470e_9a84_12438e405d36.slice. Jan 30 14:12:54.051112 containerd[1721]: time="2025-01-30T14:12:54.051074684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sc286,Uid:9434d060-dc38-470e-9a84-12438e405d36,Namespace:calico-system,Attempt:0,}" Jan 30 14:12:54.324346 containerd[1721]: time="2025-01-30T14:12:54.324193483Z" level=info msg="shim disconnected" id=862a9d62fd3f5d63bb7bbcc0486419da42568bce71f4eda953c947e870d5c5fd namespace=k8s.io Jan 30 14:12:54.324346 containerd[1721]: time="2025-01-30T14:12:54.324342763Z" level=warning msg="cleaning up after shim disconnected" id=862a9d62fd3f5d63bb7bbcc0486419da42568bce71f4eda953c947e870d5c5fd namespace=k8s.io Jan 30 14:12:54.324538 containerd[1721]: time="2025-01-30T14:12:54.324357603Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:12:54.434473 kubelet[3190]: E0130 14:12:54.434432 3190 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 30 14:12:54.435587 kubelet[3190]: E0130 14:12:54.434521 
3190 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e-calico-apiserver-certs podName:a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e nodeName:}" failed. No retries permitted until 2025-01-30 14:12:54.934498595 +0000 UTC m=+29.018626872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e-calico-apiserver-certs") pod "calico-apiserver-598ff764fd-6p78s" (UID: "a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e") : failed to sync secret cache: timed out waiting for the condition Jan 30 14:12:54.435587 kubelet[3190]: E0130 14:12:54.435508 3190 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 30 14:12:54.435587 kubelet[3190]: E0130 14:12:54.435580 3190 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b4f3c05-51a8-4482-9097-9558572b9148-calico-apiserver-certs podName:6b4f3c05-51a8-4482-9097-9558572b9148 nodeName:}" failed. No retries permitted until 2025-01-30 14:12:54.935564155 +0000 UTC m=+29.019692432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/6b4f3c05-51a8-4482-9097-9558572b9148-calico-apiserver-certs") pod "calico-apiserver-598ff764fd-v4h8w" (UID: "6b4f3c05-51a8-4482-9097-9558572b9148") : failed to sync secret cache: timed out waiting for the condition Jan 30 14:12:54.538463 containerd[1721]: time="2025-01-30T14:12:54.538413585Z" level=error msg="Failed to destroy network for sandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.539100 containerd[1721]: time="2025-01-30T14:12:54.538945585Z" level=error msg="encountered an error cleaning up failed sandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.539100 containerd[1721]: time="2025-01-30T14:12:54.539001905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d47d89cdf-pw5ns,Uid:3b6c2c7a-4ead-45d0-9c40-04e67f8b365b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.539455 kubelet[3190]: E0130 14:12:54.539326 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.539513 kubelet[3190]: E0130 14:12:54.539484 3190 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d47d89cdf-pw5ns" Jan 30 14:12:54.539546 kubelet[3190]: E0130 14:12:54.539506 3190 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d47d89cdf-pw5ns" Jan 30 14:12:54.539580 kubelet[3190]: E0130 14:12:54.539553 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d47d89cdf-pw5ns_calico-system(3b6c2c7a-4ead-45d0-9c40-04e67f8b365b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d47d89cdf-pw5ns_calico-system(3b6c2c7a-4ead-45d0-9c40-04e67f8b365b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d47d89cdf-pw5ns" 
podUID="3b6c2c7a-4ead-45d0-9c40-04e67f8b365b" Jan 30 14:12:54.555894 containerd[1721]: time="2025-01-30T14:12:54.555371150Z" level=error msg="Failed to destroy network for sandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.555894 containerd[1721]: time="2025-01-30T14:12:54.555705470Z" level=error msg="encountered an error cleaning up failed sandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.555894 containerd[1721]: time="2025-01-30T14:12:54.555759510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4568,Uid:8f59b1d6-1b79-481b-b739-8d25c393b80d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.556734 kubelet[3190]: E0130 14:12:54.555968 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.556819 kubelet[3190]: E0130 14:12:54.556767 3190 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t4568" Jan 30 14:12:54.556819 kubelet[3190]: E0130 14:12:54.556794 3190 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t4568" Jan 30 14:12:54.557384 kubelet[3190]: E0130 14:12:54.557349 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-t4568_kube-system(8f59b1d6-1b79-481b-b739-8d25c393b80d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-t4568_kube-system(8f59b1d6-1b79-481b-b739-8d25c393b80d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t4568" podUID="8f59b1d6-1b79-481b-b739-8d25c393b80d" Jan 30 14:12:54.565131 containerd[1721]: time="2025-01-30T14:12:54.565037033Z" level=error msg="Failed to destroy network for sandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 30 14:12:54.565915 containerd[1721]: time="2025-01-30T14:12:54.565877553Z" level=error msg="encountered an error cleaning up failed sandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.566061 containerd[1721]: time="2025-01-30T14:12:54.566032033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sc286,Uid:9434d060-dc38-470e-9a84-12438e405d36,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.566300 kubelet[3190]: E0130 14:12:54.566261 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.566371 kubelet[3190]: E0130 14:12:54.566321 3190 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sc286" Jan 30 14:12:54.566371 kubelet[3190]: 
E0130 14:12:54.566338 3190 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sc286" Jan 30 14:12:54.566429 kubelet[3190]: E0130 14:12:54.566378 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sc286_calico-system(9434d060-dc38-470e-9a84-12438e405d36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sc286_calico-system(9434d060-dc38-470e-9a84-12438e405d36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sc286" podUID="9434d060-dc38-470e-9a84-12438e405d36" Jan 30 14:12:54.575331 containerd[1721]: time="2025-01-30T14:12:54.575172596Z" level=error msg="Failed to destroy network for sandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.575615 containerd[1721]: time="2025-01-30T14:12:54.575577396Z" level=error msg="encountered an error cleaning up failed sandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.575673 containerd[1721]: time="2025-01-30T14:12:54.575645716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzldc,Uid:7b1fe55d-d807-4720-87c5-71fc636c7ec3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.576308 kubelet[3190]: E0130 14:12:54.575872 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:54.576308 kubelet[3190]: E0130 14:12:54.575933 3190 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzldc" Jan 30 14:12:54.576308 kubelet[3190]: E0130 14:12:54.575954 3190 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzldc" Jan 30 14:12:54.576472 kubelet[3190]: E0130 14:12:54.575998 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wzldc_kube-system(7b1fe55d-d807-4720-87c5-71fc636c7ec3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wzldc_kube-system(7b1fe55d-d807-4720-87c5-71fc636c7ec3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wzldc" podUID="7b1fe55d-d807-4720-87c5-71fc636c7ec3" Jan 30 14:12:55.009217 containerd[1721]: time="2025-01-30T14:12:55.009150281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598ff764fd-6p78s,Uid:a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:12:55.013243 containerd[1721]: time="2025-01-30T14:12:55.013185282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598ff764fd-v4h8w,Uid:6b4f3c05-51a8-4482-9097-9558572b9148,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:12:55.111582 containerd[1721]: time="2025-01-30T14:12:55.111060270Z" level=error msg="Failed to destroy network for sandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.111582 containerd[1721]: time="2025-01-30T14:12:55.111411751Z" level=error msg="encountered an error cleaning up failed sandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.111582 containerd[1721]: time="2025-01-30T14:12:55.111467671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598ff764fd-6p78s,Uid:a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.111839 kubelet[3190]: E0130 14:12:55.111680 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.111839 kubelet[3190]: E0130 14:12:55.111737 3190 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598ff764fd-6p78s" Jan 30 14:12:55.111839 kubelet[3190]: E0130 14:12:55.111760 3190 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598ff764fd-6p78s" Jan 30 14:12:55.111932 kubelet[3190]: E0130 14:12:55.111803 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-598ff764fd-6p78s_calico-apiserver(a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-598ff764fd-6p78s_calico-apiserver(a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598ff764fd-6p78s" podUID="a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e" Jan 30 14:12:55.114158 containerd[1721]: time="2025-01-30T14:12:55.114120991Z" level=error msg="Failed to destroy network for sandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.114924 containerd[1721]: time="2025-01-30T14:12:55.114569712Z" level=error msg="encountered an error cleaning up failed sandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.114924 containerd[1721]: time="2025-01-30T14:12:55.114636032Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598ff764fd-v4h8w,Uid:6b4f3c05-51a8-4482-9097-9558572b9148,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.115095 kubelet[3190]: E0130 14:12:55.114964 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.115095 kubelet[3190]: E0130 14:12:55.115014 3190 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598ff764fd-v4h8w" Jan 30 14:12:55.115518 kubelet[3190]: E0130 14:12:55.115459 3190 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598ff764fd-v4h8w" Jan 30 14:12:55.115578 kubelet[3190]: E0130 14:12:55.115548 3190 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-598ff764fd-v4h8w_calico-apiserver(6b4f3c05-51a8-4482-9097-9558572b9148)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-598ff764fd-v4h8w_calico-apiserver(6b4f3c05-51a8-4482-9097-9558572b9148)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598ff764fd-v4h8w" podUID="6b4f3c05-51a8-4482-9097-9558572b9148" Jan 30 14:12:55.161104 kubelet[3190]: I0130 14:12:55.161058 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:12:55.162333 containerd[1721]: time="2025-01-30T14:12:55.162164125Z" level=info msg="StopPodSandbox for \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\"" Jan 30 14:12:55.162749 containerd[1721]: time="2025-01-30T14:12:55.162712485Z" level=info msg="Ensure that sandbox b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb in task-service has been cleanup successfully" Jan 30 14:12:55.164279 kubelet[3190]: I0130 14:12:55.164025 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:12:55.165517 containerd[1721]: time="2025-01-30T14:12:55.165423766Z" level=info msg="StopPodSandbox for \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\"" Jan 30 14:12:55.165792 containerd[1721]: time="2025-01-30T14:12:55.165593846Z" level=info msg="Ensure that sandbox 8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084 in task-service has 
been cleanup successfully" Jan 30 14:12:55.168081 kubelet[3190]: I0130 14:12:55.167995 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:12:55.170358 containerd[1721]: time="2025-01-30T14:12:55.170181568Z" level=info msg="StopPodSandbox for \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\"" Jan 30 14:12:55.171121 containerd[1721]: time="2025-01-30T14:12:55.170884848Z" level=info msg="Ensure that sandbox d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb in task-service has been cleanup successfully" Jan 30 14:12:55.174607 kubelet[3190]: I0130 14:12:55.174348 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:12:55.176604 containerd[1721]: time="2025-01-30T14:12:55.176566009Z" level=info msg="StopPodSandbox for \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\"" Jan 30 14:12:55.176793 containerd[1721]: time="2025-01-30T14:12:55.176746089Z" level=info msg="Ensure that sandbox 3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386 in task-service has been cleanup successfully" Jan 30 14:12:55.184154 kubelet[3190]: I0130 14:12:55.184125 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:12:55.186252 containerd[1721]: time="2025-01-30T14:12:55.186027692Z" level=info msg="StopPodSandbox for \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\"" Jan 30 14:12:55.187321 containerd[1721]: time="2025-01-30T14:12:55.187172253Z" level=info msg="Ensure that sandbox 7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf in task-service has been cleanup successfully" Jan 30 14:12:55.189761 kubelet[3190]: I0130 14:12:55.189334 3190 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:12:55.190906 containerd[1721]: time="2025-01-30T14:12:55.190557053Z" level=info msg="StopPodSandbox for \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\"" Jan 30 14:12:55.192653 containerd[1721]: time="2025-01-30T14:12:55.192448614Z" level=info msg="Ensure that sandbox 192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a in task-service has been cleanup successfully" Jan 30 14:12:55.206944 containerd[1721]: time="2025-01-30T14:12:55.206081578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 14:12:55.274265 containerd[1721]: time="2025-01-30T14:12:55.273606757Z" level=error msg="StopPodSandbox for \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\" failed" error="failed to destroy network for sandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.274659 kubelet[3190]: E0130 14:12:55.274617 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:12:55.274731 kubelet[3190]: E0130 14:12:55.274677 3190 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084"} Jan 30 14:12:55.274766 kubelet[3190]: E0130 14:12:55.274742 3190 
kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6b4f3c05-51a8-4482-9097-9558572b9148\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:12:55.274818 kubelet[3190]: E0130 14:12:55.274765 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6b4f3c05-51a8-4482-9097-9558572b9148\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598ff764fd-v4h8w" podUID="6b4f3c05-51a8-4482-9097-9558572b9148" Jan 30 14:12:55.283724 containerd[1721]: time="2025-01-30T14:12:55.283670040Z" level=error msg="StopPodSandbox for \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\" failed" error="failed to destroy network for sandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.284093 kubelet[3190]: E0130 14:12:55.284047 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:12:55.284173 kubelet[3190]: E0130 14:12:55.284101 3190 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb"} Jan 30 14:12:55.284173 kubelet[3190]: E0130 14:12:55.284135 3190 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:12:55.284173 kubelet[3190]: E0130 14:12:55.284155 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598ff764fd-6p78s" podUID="a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e" Jan 30 14:12:55.284693 kubelet[3190]: E0130 14:12:55.284481 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:12:55.284693 kubelet[3190]: E0130 14:12:55.284511 3190 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386"} Jan 30 14:12:55.284693 kubelet[3190]: E0130 14:12:55.284533 3190 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b6c2c7a-4ead-45d0-9c40-04e67f8b365b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:12:55.284693 kubelet[3190]: E0130 14:12:55.284564 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b6c2c7a-4ead-45d0-9c40-04e67f8b365b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d47d89cdf-pw5ns" podUID="3b6c2c7a-4ead-45d0-9c40-04e67f8b365b" Jan 30 14:12:55.284912 containerd[1721]: time="2025-01-30T14:12:55.284327401Z" level=error msg="StopPodSandbox for \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\" failed" error="failed to destroy network for sandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.286866 containerd[1721]: time="2025-01-30T14:12:55.286777281Z" level=error msg="StopPodSandbox for \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\" failed" error="failed to destroy network for sandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.287028 kubelet[3190]: E0130 14:12:55.286958 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:12:55.287028 kubelet[3190]: E0130 14:12:55.287001 3190 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb"} Jan 30 14:12:55.287028 kubelet[3190]: E0130 14:12:55.287026 3190 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9434d060-dc38-470e-9a84-12438e405d36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:12:55.287157 kubelet[3190]: E0130 14:12:55.287044 3190 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"9434d060-dc38-470e-9a84-12438e405d36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sc286" podUID="9434d060-dc38-470e-9a84-12438e405d36" Jan 30 14:12:55.290576 containerd[1721]: time="2025-01-30T14:12:55.290265362Z" level=error msg="StopPodSandbox for \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\" failed" error="failed to destroy network for sandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.290676 kubelet[3190]: E0130 14:12:55.290451 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:12:55.290676 kubelet[3190]: E0130 14:12:55.290491 3190 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a"} Jan 30 14:12:55.290676 kubelet[3190]: E0130 14:12:55.290522 3190 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f59b1d6-1b79-481b-b739-8d25c393b80d\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:12:55.290880 kubelet[3190]: E0130 14:12:55.290541 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f59b1d6-1b79-481b-b739-8d25c393b80d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t4568" podUID="8f59b1d6-1b79-481b-b739-8d25c393b80d" Jan 30 14:12:55.297163 containerd[1721]: time="2025-01-30T14:12:55.297085044Z" level=error msg="StopPodSandbox for \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\" failed" error="failed to destroy network for sandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:12:55.297498 kubelet[3190]: E0130 14:12:55.297458 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:12:55.297569 kubelet[3190]: E0130 14:12:55.297511 3190 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf"} Jan 30 14:12:55.297569 kubelet[3190]: E0130 14:12:55.297545 3190 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b1fe55d-d807-4720-87c5-71fc636c7ec3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:12:55.297641 kubelet[3190]: E0130 14:12:55.297565 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b1fe55d-d807-4720-87c5-71fc636c7ec3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wzldc" podUID="7b1fe55d-d807-4720-87c5-71fc636c7ec3" Jan 30 14:12:55.417780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a-shm.mount: Deactivated successfully. Jan 30 14:12:55.418277 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386-shm.mount: Deactivated successfully. 
Jan 30 14:12:59.411772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36406573.mount: Deactivated successfully. Jan 30 14:12:59.538254 containerd[1721]: time="2025-01-30T14:12:59.536843927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:59.542899 containerd[1721]: time="2025-01-30T14:12:59.542852372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 30 14:12:59.543669 containerd[1721]: time="2025-01-30T14:12:59.543636173Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:59.548540 containerd[1721]: time="2025-01-30T14:12:59.548488498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:59.549317 containerd[1721]: time="2025-01-30T14:12:59.549289338Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.34316632s" Jan 30 14:12:59.549426 containerd[1721]: time="2025-01-30T14:12:59.549409939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 30 14:12:59.561385 containerd[1721]: time="2025-01-30T14:12:59.561334950Z" level=info msg="CreateContainer within sandbox \"2fb6748a00985f3772ee72c25fab3939d8777bbbd6f973870c2af55749a5f8a9\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 14:12:59.610109 containerd[1721]: time="2025-01-30T14:12:59.610055755Z" level=info msg="CreateContainer within sandbox \"2fb6748a00985f3772ee72c25fab3939d8777bbbd6f973870c2af55749a5f8a9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b2d25bb709e26ede012ff553f9834ce0446f1217a656eb70df5d1248f2c3a8bc\"" Jan 30 14:12:59.611410 containerd[1721]: time="2025-01-30T14:12:59.611343116Z" level=info msg="StartContainer for \"b2d25bb709e26ede012ff553f9834ce0446f1217a656eb70df5d1248f2c3a8bc\"" Jan 30 14:12:59.641430 systemd[1]: Started cri-containerd-b2d25bb709e26ede012ff553f9834ce0446f1217a656eb70df5d1248f2c3a8bc.scope - libcontainer container b2d25bb709e26ede012ff553f9834ce0446f1217a656eb70df5d1248f2c3a8bc. Jan 30 14:12:59.671853 containerd[1721]: time="2025-01-30T14:12:59.671810773Z" level=info msg="StartContainer for \"b2d25bb709e26ede012ff553f9834ce0446f1217a656eb70df5d1248f2c3a8bc\" returns successfully" Jan 30 14:12:59.895485 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 14:12:59.895630 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 30 14:13:00.249751 kubelet[3190]: I0130 14:13:00.248690 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kqpqj" podStartSLOduration=2.68200737 podStartE2EDuration="18.24867391s" podCreationTimestamp="2025-01-30 14:12:42 +0000 UTC" firstStartedPulling="2025-01-30 14:12:43.983429079 +0000 UTC m=+18.067557356" lastFinishedPulling="2025-01-30 14:12:59.550095619 +0000 UTC m=+33.634223896" observedRunningTime="2025-01-30 14:13:00.24832415 +0000 UTC m=+34.332452427" watchObservedRunningTime="2025-01-30 14:13:00.24867391 +0000 UTC m=+34.332802187" Jan 30 14:13:06.882131 kubelet[3190]: I0130 14:13:06.882037 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:13:07.042568 containerd[1721]: time="2025-01-30T14:13:07.042470626Z" level=info msg="StopPodSandbox for \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\"" Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.101 [INFO][4676] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.101 [INFO][4676] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" iface="eth0" netns="/var/run/netns/cni-9329e768-3ba7-cd31-d80f-fcc7d6002ed4" Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.102 [INFO][4676] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" iface="eth0" netns="/var/run/netns/cni-9329e768-3ba7-cd31-d80f-fcc7d6002ed4" Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.102 [INFO][4676] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" iface="eth0" netns="/var/run/netns/cni-9329e768-3ba7-cd31-d80f-fcc7d6002ed4" Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.103 [INFO][4676] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.103 [INFO][4676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.123 [INFO][4682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" HandleID="k8s-pod-network.8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.124 [INFO][4682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.124 [INFO][4682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.135 [WARNING][4682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" HandleID="k8s-pod-network.8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.135 [INFO][4682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" HandleID="k8s-pod-network.8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.136 [INFO][4682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:07.140508 containerd[1721]: 2025-01-30 14:13:07.139 [INFO][4676] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:13:07.141012 containerd[1721]: time="2025-01-30T14:13:07.140950717Z" level=info msg="TearDown network for sandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\" successfully" Jan 30 14:13:07.141012 containerd[1721]: time="2025-01-30T14:13:07.140990118Z" level=info msg="StopPodSandbox for \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\" returns successfully" Jan 30 14:13:07.142741 containerd[1721]: time="2025-01-30T14:13:07.142389969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598ff764fd-v4h8w,Uid:6b4f3c05-51a8-4482-9097-9558572b9148,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:13:07.144505 systemd[1]: run-netns-cni\x2d9329e768\x2d3ba7\x2dcd31\x2dd80f\x2dfcc7d6002ed4.mount: Deactivated successfully. 
Jan 30 14:13:07.404836 systemd-networkd[1621]: cali2c8273b7948: Link UP Jan 30 14:13:07.405316 systemd-networkd[1621]: cali2c8273b7948: Gained carrier Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.236 [INFO][4705] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.264 [INFO][4705] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0 calico-apiserver-598ff764fd- calico-apiserver 6b4f3c05-51a8-4482-9097-9558572b9148 775 0 2025-01-30 14:12:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:598ff764fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-1247579205 calico-apiserver-598ff764fd-v4h8w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2c8273b7948 [] []}} ContainerID="7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-v4h8w" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-" Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.264 [INFO][4705] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-v4h8w" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.321 [INFO][4719] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" 
HandleID="k8s-pod-network.7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.336 [INFO][4719] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" HandleID="k8s-pod-network.7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003356f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-1247579205", "pod":"calico-apiserver-598ff764fd-v4h8w", "timestamp":"2025-01-30 14:13:07.321312729 +0000 UTC"}, Hostname:"ci-4081.3.0-a-1247579205", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.336 [INFO][4719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.336 [INFO][4719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.336 [INFO][4719] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-1247579205' Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.339 [INFO][4719] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.343 [INFO][4719] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-1247579205" Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.349 [INFO][4719] ipam/ipam.go 489: Trying affinity for 192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.351 [INFO][4719] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.354 [INFO][4719] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.354 [INFO][4719] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.357 [INFO][4719] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805 Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.363 [INFO][4719] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.375 [INFO][4719] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.121.65/26] block=192.168.121.64/26 handle="k8s-pod-network.7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.375 [INFO][4719] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.65/26] handle="k8s-pod-network.7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.375 [INFO][4719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:07.427122 containerd[1721]: 2025-01-30 14:13:07.375 [INFO][4719] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.65/26] IPv6=[] ContainerID="7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" HandleID="k8s-pod-network.7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:07.427692 containerd[1721]: 2025-01-30 14:13:07.377 [INFO][4705] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-v4h8w" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0", GenerateName:"calico-apiserver-598ff764fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b4f3c05-51a8-4482-9097-9558572b9148", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598ff764fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"", Pod:"calico-apiserver-598ff764fd-v4h8w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c8273b7948", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:07.427692 containerd[1721]: 2025-01-30 14:13:07.378 [INFO][4705] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.65/32] ContainerID="7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-v4h8w" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:07.427692 containerd[1721]: 2025-01-30 14:13:07.378 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c8273b7948 ContainerID="7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-v4h8w" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:07.427692 containerd[1721]: 2025-01-30 14:13:07.406 [INFO][4705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-v4h8w" 
WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:07.427692 containerd[1721]: 2025-01-30 14:13:07.406 [INFO][4705] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-v4h8w" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0", GenerateName:"calico-apiserver-598ff764fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b4f3c05-51a8-4482-9097-9558572b9148", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598ff764fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805", Pod:"calico-apiserver-598ff764fd-v4h8w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c8273b7948", MAC:"4e:e7:e0:d6:ec:aa", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:07.427692 containerd[1721]: 2025-01-30 14:13:07.423 [INFO][4705] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-v4h8w" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:07.454681 containerd[1721]: time="2025-01-30T14:13:07.454453372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:07.454681 containerd[1721]: time="2025-01-30T14:13:07.454523292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:07.454681 containerd[1721]: time="2025-01-30T14:13:07.454538812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:07.454681 containerd[1721]: time="2025-01-30T14:13:07.454632333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:07.479490 systemd[1]: Started cri-containerd-7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805.scope - libcontainer container 7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805. 
Jan 30 14:13:07.511273 containerd[1721]: time="2025-01-30T14:13:07.509543323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598ff764fd-v4h8w,Uid:6b4f3c05-51a8-4482-9097-9558572b9148,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805\"" Jan 30 14:13:07.515397 containerd[1721]: time="2025-01-30T14:13:07.514886445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:13:07.628255 kernel: bpftool[4814]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 14:13:07.848122 systemd-networkd[1621]: vxlan.calico: Link UP Jan 30 14:13:07.848135 systemd-networkd[1621]: vxlan.calico: Gained carrier Jan 30 14:13:08.043842 containerd[1721]: time="2025-01-30T14:13:08.043385542Z" level=info msg="StopPodSandbox for \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\"" Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.095 [INFO][4874] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.095 [INFO][4874] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" iface="eth0" netns="/var/run/netns/cni-8cd6a39d-3ff0-d82e-9725-91fa0966a292" Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.095 [INFO][4874] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" iface="eth0" netns="/var/run/netns/cni-8cd6a39d-3ff0-d82e-9725-91fa0966a292" Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.095 [INFO][4874] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" iface="eth0" netns="/var/run/netns/cni-8cd6a39d-3ff0-d82e-9725-91fa0966a292" Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.095 [INFO][4874] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.096 [INFO][4874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.119 [INFO][4882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" HandleID="k8s-pod-network.7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.119 [INFO][4882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.119 [INFO][4882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.131 [WARNING][4882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" HandleID="k8s-pod-network.7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.131 [INFO][4882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" HandleID="k8s-pod-network.7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.133 [INFO][4882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:08.138877 containerd[1721]: 2025-01-30 14:13:08.137 [INFO][4874] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:13:08.139903 containerd[1721]: time="2025-01-30T14:13:08.138945171Z" level=info msg="TearDown network for sandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\" successfully" Jan 30 14:13:08.139903 containerd[1721]: time="2025-01-30T14:13:08.138971451Z" level=info msg="StopPodSandbox for \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\" returns successfully" Jan 30 14:13:08.140578 containerd[1721]: time="2025-01-30T14:13:08.140535023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzldc,Uid:7b1fe55d-d807-4720-87c5-71fc636c7ec3,Namespace:kube-system,Attempt:1,}" Jan 30 14:13:08.145456 systemd[1]: run-netns-cni\x2d8cd6a39d\x2d3ff0\x2dd82e\x2d9725\x2d91fa0966a292.mount: Deactivated successfully. 
Jan 30 14:13:08.332488 systemd-networkd[1621]: cali70107ec59bc: Link UP Jan 30 14:13:08.333139 systemd-networkd[1621]: cali70107ec59bc: Gained carrier Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.256 [INFO][4909] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0 coredns-6f6b679f8f- kube-system 7b1fe55d-d807-4720-87c5-71fc636c7ec3 784 0 2025-01-30 14:12:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-1247579205 coredns-6f6b679f8f-wzldc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali70107ec59bc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzldc" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-" Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.256 [INFO][4909] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzldc" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.282 [INFO][4919] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" HandleID="k8s-pod-network.bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.294 [INFO][4919] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" HandleID="k8s-pod-network.bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aa1f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-1247579205", "pod":"coredns-6f6b679f8f-wzldc", "timestamp":"2025-01-30 14:13:08.282897938 +0000 UTC"}, Hostname:"ci-4081.3.0-a-1247579205", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.294 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.294 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.294 [INFO][4919] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-1247579205' Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.296 [INFO][4919] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.300 [INFO][4919] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-1247579205" Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.306 [INFO][4919] ipam/ipam.go 489: Trying affinity for 192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.308 [INFO][4919] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.311 [INFO][4919] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.311 [INFO][4919] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.313 [INFO][4919] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7 Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.318 [INFO][4919] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.326 [INFO][4919] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.121.66/26] block=192.168.121.64/26 handle="k8s-pod-network.bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.326 [INFO][4919] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.66/26] handle="k8s-pod-network.bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.326 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:08.349151 containerd[1721]: 2025-01-30 14:13:08.327 [INFO][4919] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.66/26] IPv6=[] ContainerID="bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" HandleID="k8s-pod-network.bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:08.350514 containerd[1721]: 2025-01-30 14:13:08.329 [INFO][4909] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzldc" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7b1fe55d-d807-4720-87c5-71fc636c7ec3", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"", Pod:"coredns-6f6b679f8f-wzldc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70107ec59bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:08.350514 containerd[1721]: 2025-01-30 14:13:08.329 [INFO][4909] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.66/32] ContainerID="bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzldc" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:08.350514 containerd[1721]: 2025-01-30 14:13:08.329 [INFO][4909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70107ec59bc ContainerID="bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzldc" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:08.350514 containerd[1721]: 2025-01-30 14:13:08.332 [INFO][4909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzldc" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:08.350514 containerd[1721]: 2025-01-30 14:13:08.333 [INFO][4909] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzldc" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7b1fe55d-d807-4720-87c5-71fc636c7ec3", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7", Pod:"coredns-6f6b679f8f-wzldc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70107ec59bc", MAC:"06:e4:7d:aa:83:3e", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:08.350514 containerd[1721]: 2025-01-30 14:13:08.345 [INFO][4909] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzldc" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:08.375891 containerd[1721]: time="2025-01-30T14:13:08.375741904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:08.375891 containerd[1721]: time="2025-01-30T14:13:08.375827825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:08.375891 containerd[1721]: time="2025-01-30T14:13:08.375843185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:08.376381 containerd[1721]: time="2025-01-30T14:13:08.375956666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:08.397412 systemd[1]: Started cri-containerd-bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7.scope - libcontainer container bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7. 
Jan 30 14:13:08.431337 containerd[1721]: time="2025-01-30T14:13:08.431169778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzldc,Uid:7b1fe55d-d807-4720-87c5-71fc636c7ec3,Namespace:kube-system,Attempt:1,} returns sandbox id \"bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7\"" Jan 30 14:13:08.434625 containerd[1721]: time="2025-01-30T14:13:08.434581365Z" level=info msg="CreateContainer within sandbox \"bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:13:08.507890 containerd[1721]: time="2025-01-30T14:13:08.507844499Z" level=info msg="CreateContainer within sandbox \"bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3fe31d2e3ce897e5239b3572c4305dfe2bd481c9b3d02dfb0ddbe1ff84b32319\"" Jan 30 14:13:08.509205 containerd[1721]: time="2025-01-30T14:13:08.509059268Z" level=info msg="StartContainer for \"3fe31d2e3ce897e5239b3572c4305dfe2bd481c9b3d02dfb0ddbe1ff84b32319\"" Jan 30 14:13:08.538414 systemd[1]: Started cri-containerd-3fe31d2e3ce897e5239b3572c4305dfe2bd481c9b3d02dfb0ddbe1ff84b32319.scope - libcontainer container 3fe31d2e3ce897e5239b3572c4305dfe2bd481c9b3d02dfb0ddbe1ff84b32319. 
Jan 30 14:13:08.571497 containerd[1721]: time="2025-01-30T14:13:08.571382156Z" level=info msg="StartContainer for \"3fe31d2e3ce897e5239b3572c4305dfe2bd481c9b3d02dfb0ddbe1ff84b32319\" returns successfully" Jan 30 14:13:09.042728 containerd[1721]: time="2025-01-30T14:13:09.042369283Z" level=info msg="StopPodSandbox for \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\"" Jan 30 14:13:09.043433 containerd[1721]: time="2025-01-30T14:13:09.042369203Z" level=info msg="StopPodSandbox for \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\"" Jan 30 14:13:09.052550 systemd-networkd[1621]: cali2c8273b7948: Gained IPv6LL Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.130 [INFO][5042] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.130 [INFO][5042] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" iface="eth0" netns="/var/run/netns/cni-9e8b259d-d284-b480-c59f-729e41ffeeee" Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.131 [INFO][5042] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" iface="eth0" netns="/var/run/netns/cni-9e8b259d-d284-b480-c59f-729e41ffeeee" Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.133 [INFO][5042] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" iface="eth0" netns="/var/run/netns/cni-9e8b259d-d284-b480-c59f-729e41ffeeee" Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.133 [INFO][5042] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.133 [INFO][5042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.196 [INFO][5055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" HandleID="k8s-pod-network.192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.196 [INFO][5055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.196 [INFO][5055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.212 [WARNING][5055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" HandleID="k8s-pod-network.192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.212 [INFO][5055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" HandleID="k8s-pod-network.192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.215 [INFO][5055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:09.218099 containerd[1721]: 2025-01-30 14:13:09.216 [INFO][5042] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:13:09.220791 containerd[1721]: time="2025-01-30T14:13:09.218259700Z" level=info msg="TearDown network for sandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\" successfully" Jan 30 14:13:09.220791 containerd[1721]: time="2025-01-30T14:13:09.218285461Z" level=info msg="StopPodSandbox for \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\" returns successfully" Jan 30 14:13:09.220791 containerd[1721]: time="2025-01-30T14:13:09.219355229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4568,Uid:8f59b1d6-1b79-481b-b739-8d25c393b80d,Namespace:kube-system,Attempt:1,}" Jan 30 14:13:09.221746 systemd[1]: run-netns-cni\x2d9e8b259d\x2dd284\x2db480\x2dc59f\x2d729e41ffeeee.mount: Deactivated successfully. 
Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 14:13:09.181 [INFO][5046] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 14:13:09.182 [INFO][5046] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" iface="eth0" netns="/var/run/netns/cni-53d5e800-f5bc-6e93-d8bc-54b053601bae" Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 14:13:09.182 [INFO][5046] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" iface="eth0" netns="/var/run/netns/cni-53d5e800-f5bc-6e93-d8bc-54b053601bae" Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 14:13:09.182 [INFO][5046] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" iface="eth0" netns="/var/run/netns/cni-53d5e800-f5bc-6e93-d8bc-54b053601bae" Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 14:13:09.182 [INFO][5046] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 14:13:09.183 [INFO][5046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 14:13:09.221 [INFO][5059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" HandleID="k8s-pod-network.3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Workload="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 
14:13:09.221 [INFO][5059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 14:13:09.221 [INFO][5059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 14:13:09.232 [WARNING][5059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" HandleID="k8s-pod-network.3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Workload="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 14:13:09.232 [INFO][5059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" HandleID="k8s-pod-network.3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Workload="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 14:13:09.235 [INFO][5059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:09.240781 containerd[1721]: 2025-01-30 14:13:09.238 [INFO][5046] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:13:09.241868 containerd[1721]: time="2025-01-30T14:13:09.241825845Z" level=info msg="TearDown network for sandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\" successfully" Jan 30 14:13:09.241868 containerd[1721]: time="2025-01-30T14:13:09.241864925Z" level=info msg="StopPodSandbox for \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\" returns successfully" Jan 30 14:13:09.244850 systemd[1]: run-netns-cni\x2d53d5e800\x2df5bc\x2d6e93\x2dd8bc\x2d54b053601bae.mount: Deactivated successfully. 
Jan 30 14:13:09.246905 containerd[1721]: time="2025-01-30T14:13:09.246862684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d47d89cdf-pw5ns,Uid:3b6c2c7a-4ead-45d0-9c40-04e67f8b365b,Namespace:calico-system,Attempt:1,}" Jan 30 14:13:09.311516 kubelet[3190]: I0130 14:13:09.311386 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wzldc" podStartSLOduration=36.311366709 podStartE2EDuration="36.311366709s" podCreationTimestamp="2025-01-30 14:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:09.285577027 +0000 UTC m=+43.369705344" watchObservedRunningTime="2025-01-30 14:13:09.311366709 +0000 UTC m=+43.395494986" Jan 30 14:13:09.436532 systemd-networkd[1621]: vxlan.calico: Gained IPv6LL Jan 30 14:13:09.469184 systemd-networkd[1621]: cali178ca8f796a: Link UP Jan 30 14:13:09.470079 systemd-networkd[1621]: cali178ca8f796a: Gained carrier Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.351 [INFO][5068] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0 coredns-6f6b679f8f- kube-system 8f59b1d6-1b79-481b-b739-8d25c393b80d 793 0 2025-01-30 14:12:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-1247579205 coredns-6f6b679f8f-t4568 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali178ca8f796a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4568" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-" Jan 30 
14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.352 [INFO][5068] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4568" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.398 [INFO][5093] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" HandleID="k8s-pod-network.362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.414 [INFO][5093] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" HandleID="k8s-pod-network.362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bb350), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-1247579205", "pod":"coredns-6f6b679f8f-t4568", "timestamp":"2025-01-30 14:13:09.397993548 +0000 UTC"}, Hostname:"ci-4081.3.0-a-1247579205", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.415 [INFO][5093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.415 [INFO][5093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.415 [INFO][5093] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-1247579205' Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.425 [INFO][5093] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.433 [INFO][5093] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.438 [INFO][5093] ipam/ipam.go 489: Trying affinity for 192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.440 [INFO][5093] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.443 [INFO][5093] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.443 [INFO][5093] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.444 [INFO][5093] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.453 [INFO][5093] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.462 [INFO][5093] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.121.67/26] block=192.168.121.64/26 handle="k8s-pod-network.362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.463 [INFO][5093] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.67/26] handle="k8s-pod-network.362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.463 [INFO][5093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:09.487124 containerd[1721]: 2025-01-30 14:13:09.464 [INFO][5093] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.67/26] IPv6=[] ContainerID="362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" HandleID="k8s-pod-network.362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:09.488339 containerd[1721]: 2025-01-30 14:13:09.465 [INFO][5068] cni-plugin/k8s.go 386: Populated endpoint ContainerID="362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4568" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8f59b1d6-1b79-481b-b739-8d25c393b80d", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"", Pod:"coredns-6f6b679f8f-t4568", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali178ca8f796a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:09.488339 containerd[1721]: 2025-01-30 14:13:09.466 [INFO][5068] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.67/32] ContainerID="362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4568" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:09.488339 containerd[1721]: 2025-01-30 14:13:09.466 [INFO][5068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali178ca8f796a ContainerID="362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4568" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:09.488339 containerd[1721]: 2025-01-30 14:13:09.468 [INFO][5068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4568" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:09.488339 containerd[1721]: 2025-01-30 14:13:09.469 [INFO][5068] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4568" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8f59b1d6-1b79-481b-b739-8d25c393b80d", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a", Pod:"coredns-6f6b679f8f-t4568", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali178ca8f796a", MAC:"0a:81:56:33:39:9d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:09.488339 containerd[1721]: 2025-01-30 14:13:09.484 [INFO][5068] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4568" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:09.513587 containerd[1721]: time="2025-01-30T14:13:09.513489732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:09.513794 containerd[1721]: time="2025-01-30T14:13:09.513665773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:09.513794 containerd[1721]: time="2025-01-30T14:13:09.513679373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:09.513950 containerd[1721]: time="2025-01-30T14:13:09.513874295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:09.536738 systemd[1]: Started cri-containerd-362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a.scope - libcontainer container 362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a. 
Jan 30 14:13:09.572052 systemd-networkd[1621]: cali70ee31bcaca: Link UP Jan 30 14:13:09.572932 systemd-networkd[1621]: cali70ee31bcaca: Gained carrier Jan 30 14:13:09.597634 containerd[1721]: time="2025-01-30T14:13:09.597368574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4568,Uid:8f59b1d6-1b79-481b-b739-8d25c393b80d,Namespace:kube-system,Attempt:1,} returns sandbox id \"362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a\"" Jan 30 14:13:09.604176 containerd[1721]: time="2025-01-30T14:13:09.604014696Z" level=info msg="CreateContainer within sandbox \"362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.381 [INFO][5086] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0 calico-kube-controllers-6d47d89cdf- calico-system 3b6c2c7a-4ead-45d0-9c40-04e67f8b365b 794 0 2025-01-30 14:12:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d47d89cdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-1247579205 calico-kube-controllers-6d47d89cdf-pw5ns eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali70ee31bcaca [] []}} ContainerID="7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" Namespace="calico-system" Pod="calico-kube-controllers-6d47d89cdf-pw5ns" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.381 [INFO][5086] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" Namespace="calico-system" Pod="calico-kube-controllers-6d47d89cdf-pw5ns" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.418 [INFO][5100] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" HandleID="k8s-pod-network.7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" Workload="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.433 [INFO][5100] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" HandleID="k8s-pod-network.7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" Workload="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-1247579205", "pod":"calico-kube-controllers-6d47d89cdf-pw5ns", "timestamp":"2025-01-30 14:13:09.418465068 +0000 UTC"}, Hostname:"ci-4081.3.0-a-1247579205", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.433 [INFO][5100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.463 [INFO][5100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.463 [INFO][5100] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-1247579205' Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.518 [INFO][5100] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.523 [INFO][5100] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.539 [INFO][5100] ipam/ipam.go 489: Trying affinity for 192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.543 [INFO][5100] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.545 [INFO][5100] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.545 [INFO][5100] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.546 [INFO][5100] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.553 [INFO][5100] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.563 [INFO][5100] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.121.68/26] block=192.168.121.64/26 handle="k8s-pod-network.7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.564 [INFO][5100] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.68/26] handle="k8s-pod-network.7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.564 [INFO][5100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:09.605242 containerd[1721]: 2025-01-30 14:13:09.564 [INFO][5100] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.68/26] IPv6=[] ContainerID="7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" HandleID="k8s-pod-network.7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" Workload="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:09.607011 containerd[1721]: 2025-01-30 14:13:09.567 [INFO][5086] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" Namespace="calico-system" Pod="calico-kube-controllers-6d47d89cdf-pw5ns" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0", GenerateName:"calico-kube-controllers-6d47d89cdf-", Namespace:"calico-system", SelfLink:"", UID:"3b6c2c7a-4ead-45d0-9c40-04e67f8b365b", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d47d89cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"", Pod:"calico-kube-controllers-6d47d89cdf-pw5ns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70ee31bcaca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:09.607011 containerd[1721]: 2025-01-30 14:13:09.567 [INFO][5086] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.68/32] ContainerID="7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" Namespace="calico-system" Pod="calico-kube-controllers-6d47d89cdf-pw5ns" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:09.607011 containerd[1721]: 2025-01-30 14:13:09.568 [INFO][5086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70ee31bcaca ContainerID="7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" Namespace="calico-system" Pod="calico-kube-controllers-6d47d89cdf-pw5ns" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:09.607011 containerd[1721]: 2025-01-30 14:13:09.572 [INFO][5086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" Namespace="calico-system" Pod="calico-kube-controllers-6d47d89cdf-pw5ns" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:09.607011 containerd[1721]: 2025-01-30 14:13:09.573 [INFO][5086] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" Namespace="calico-system" Pod="calico-kube-controllers-6d47d89cdf-pw5ns" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0", GenerateName:"calico-kube-controllers-6d47d89cdf-", Namespace:"calico-system", SelfLink:"", UID:"3b6c2c7a-4ead-45d0-9c40-04e67f8b365b", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d47d89cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc", Pod:"calico-kube-controllers-6d47d89cdf-pw5ns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70ee31bcaca", MAC:"2a:64:a8:f1:7a:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:09.607011 containerd[1721]: 2025-01-30 14:13:09.597 [INFO][5086] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc" Namespace="calico-system" Pod="calico-kube-controllers-6d47d89cdf-pw5ns" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:09.638576 containerd[1721]: time="2025-01-30T14:13:09.638088349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:09.638576 containerd[1721]: time="2025-01-30T14:13:09.638156389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:09.638576 containerd[1721]: time="2025-01-30T14:13:09.638170789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:09.639290 containerd[1721]: time="2025-01-30T14:13:09.638364309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:09.655484 systemd[1]: Started cri-containerd-7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc.scope - libcontainer container 7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc. 
Jan 30 14:13:09.666071 containerd[1721]: time="2025-01-30T14:13:09.665927680Z" level=info msg="CreateContainer within sandbox \"362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a080d442c159839596583b3036de763d5043fc61ccf374b31acf17ab89d7bdd2\"" Jan 30 14:13:09.666798 containerd[1721]: time="2025-01-30T14:13:09.666764720Z" level=info msg="StartContainer for \"a080d442c159839596583b3036de763d5043fc61ccf374b31acf17ab89d7bdd2\"" Jan 30 14:13:09.699042 containerd[1721]: time="2025-01-30T14:13:09.698827412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d47d89cdf-pw5ns,Uid:3b6c2c7a-4ead-45d0-9c40-04e67f8b365b,Namespace:calico-system,Attempt:1,} returns sandbox id \"7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc\"" Jan 30 14:13:09.701437 systemd[1]: Started cri-containerd-a080d442c159839596583b3036de763d5043fc61ccf374b31acf17ab89d7bdd2.scope - libcontainer container a080d442c159839596583b3036de763d5043fc61ccf374b31acf17ab89d7bdd2. 
Jan 30 14:13:09.737004 containerd[1721]: time="2025-01-30T14:13:09.736950867Z" level=info msg="StartContainer for \"a080d442c159839596583b3036de763d5043fc61ccf374b31acf17ab89d7bdd2\" returns successfully" Jan 30 14:13:09.756373 systemd-networkd[1621]: cali70107ec59bc: Gained IPv6LL Jan 30 14:13:10.044263 containerd[1721]: time="2025-01-30T14:13:10.043424824Z" level=info msg="StopPodSandbox for \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\"" Jan 30 14:13:10.045474 containerd[1721]: time="2025-01-30T14:13:10.045293585Z" level=info msg="StopPodSandbox for \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\"" Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.106 [INFO][5279] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.107 [INFO][5279] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" iface="eth0" netns="/var/run/netns/cni-60fee8fe-daa4-b3d4-a212-cad2b0d48f78" Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.107 [INFO][5279] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" iface="eth0" netns="/var/run/netns/cni-60fee8fe-daa4-b3d4-a212-cad2b0d48f78" Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.107 [INFO][5279] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" iface="eth0" netns="/var/run/netns/cni-60fee8fe-daa4-b3d4-a212-cad2b0d48f78" Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.107 [INFO][5279] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.107 [INFO][5279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.139 [INFO][5291] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" HandleID="k8s-pod-network.b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.139 [INFO][5291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.139 [INFO][5291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.166 [WARNING][5291] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" HandleID="k8s-pod-network.b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.166 [INFO][5291] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" HandleID="k8s-pod-network.b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.168 [INFO][5291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:10.171916 containerd[1721]: 2025-01-30 14:13:10.169 [INFO][5279] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:13:10.172831 containerd[1721]: time="2025-01-30T14:13:10.172679713Z" level=info msg="TearDown network for sandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\" successfully" Jan 30 14:13:10.172831 containerd[1721]: time="2025-01-30T14:13:10.172723073Z" level=info msg="StopPodSandbox for \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\" returns successfully" Jan 30 14:13:10.176780 containerd[1721]: time="2025-01-30T14:13:10.176661195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598ff764fd-6p78s,Uid:a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:13:10.177975 systemd[1]: run-netns-cni\x2d60fee8fe\x2ddaa4\x2db3d4\x2da212\x2dcad2b0d48f78.mount: Deactivated successfully. 
Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.121 [INFO][5280] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.121 [INFO][5280] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" iface="eth0" netns="/var/run/netns/cni-38eded36-bd43-d7c5-8291-6c002d3cb5f4" Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.122 [INFO][5280] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" iface="eth0" netns="/var/run/netns/cni-38eded36-bd43-d7c5-8291-6c002d3cb5f4" Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.125 [INFO][5280] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" iface="eth0" netns="/var/run/netns/cni-38eded36-bd43-d7c5-8291-6c002d3cb5f4" Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.125 [INFO][5280] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.125 [INFO][5280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.170 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" HandleID="k8s-pod-network.d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Workload="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.170 [INFO][5296] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.170 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.182 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" HandleID="k8s-pod-network.d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Workload="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.182 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" HandleID="k8s-pod-network.d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Workload="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.184 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:10.187670 containerd[1721]: 2025-01-30 14:13:10.186 [INFO][5280] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:13:10.188679 containerd[1721]: time="2025-01-30T14:13:10.188140839Z" level=info msg="TearDown network for sandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\" successfully" Jan 30 14:13:10.188679 containerd[1721]: time="2025-01-30T14:13:10.188175919Z" level=info msg="StopPodSandbox for \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\" returns successfully" Jan 30 14:13:10.190867 containerd[1721]: time="2025-01-30T14:13:10.190823920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sc286,Uid:9434d060-dc38-470e-9a84-12438e405d36,Namespace:calico-system,Attempt:1,}" Jan 30 14:13:10.191462 systemd[1]: run-netns-cni\x2d38eded36\x2dbd43\x2dd7c5\x2d8291\x2d6c002d3cb5f4.mount: Deactivated successfully. Jan 30 14:13:10.292033 kubelet[3190]: I0130 14:13:10.291966 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-t4568" podStartSLOduration=37.291946239 podStartE2EDuration="37.291946239s" podCreationTimestamp="2025-01-30 14:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:10.288924157 +0000 UTC m=+44.373052434" watchObservedRunningTime="2025-01-30 14:13:10.291946239 +0000 UTC m=+44.376074516" Jan 30 14:13:10.425538 systemd-networkd[1621]: cali65214a4fc9c: Link UP Jan 30 14:13:10.425939 systemd-networkd[1621]: cali65214a4fc9c: Gained carrier Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.293 [INFO][5304] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0 calico-apiserver-598ff764fd- calico-apiserver a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e 818 0 2025-01-30 14:12:41 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:598ff764fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-1247579205 calico-apiserver-598ff764fd-6p78s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali65214a4fc9c [] []}} ContainerID="1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-6p78s" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-" Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.296 [INFO][5304] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-6p78s" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.360 [INFO][5330] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" HandleID="k8s-pod-network.1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.377 [INFO][5330] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" HandleID="k8s-pod-network.1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317410), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-1247579205", 
"pod":"calico-apiserver-598ff764fd-6p78s", "timestamp":"2025-01-30 14:13:10.360635185 +0000 UTC"}, Hostname:"ci-4081.3.0-a-1247579205", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.377 [INFO][5330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.377 [INFO][5330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.377 [INFO][5330] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-1247579205' Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.379 [INFO][5330] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.386 [INFO][5330] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.391 [INFO][5330] ipam/ipam.go 489: Trying affinity for 192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.393 [INFO][5330] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.395 [INFO][5330] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.395 [INFO][5330] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" 
host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.397 [INFO][5330] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.402 [INFO][5330] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.417 [INFO][5330] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.69/26] block=192.168.121.64/26 handle="k8s-pod-network.1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.417 [INFO][5330] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.69/26] handle="k8s-pod-network.1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.417 [INFO][5330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:13:10.445261 containerd[1721]: 2025-01-30 14:13:10.417 [INFO][5330] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.69/26] IPv6=[] ContainerID="1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" HandleID="k8s-pod-network.1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:10.446048 containerd[1721]: 2025-01-30 14:13:10.420 [INFO][5304] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-6p78s" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0", GenerateName:"calico-apiserver-598ff764fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598ff764fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"", Pod:"calico-apiserver-598ff764fd-6p78s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65214a4fc9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:10.446048 containerd[1721]: 2025-01-30 14:13:10.420 [INFO][5304] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.69/32] ContainerID="1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-6p78s" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:10.446048 containerd[1721]: 2025-01-30 14:13:10.420 [INFO][5304] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65214a4fc9c ContainerID="1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-6p78s" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:10.446048 containerd[1721]: 2025-01-30 14:13:10.426 [INFO][5304] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-6p78s" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:10.446048 containerd[1721]: 2025-01-30 14:13:10.427 [INFO][5304] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-6p78s" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0", GenerateName:"calico-apiserver-598ff764fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598ff764fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce", Pod:"calico-apiserver-598ff764fd-6p78s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65214a4fc9c", MAC:"66:b4:00:01:79:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:10.446048 containerd[1721]: 2025-01-30 14:13:10.441 [INFO][5304] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce" Namespace="calico-apiserver" Pod="calico-apiserver-598ff764fd-6p78s" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:10.471143 containerd[1721]: time="2025-01-30T14:13:10.470517387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:10.471143 containerd[1721]: time="2025-01-30T14:13:10.470576107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:10.471143 containerd[1721]: time="2025-01-30T14:13:10.470597027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:10.471143 containerd[1721]: time="2025-01-30T14:13:10.470677307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:10.491622 systemd[1]: Started cri-containerd-1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce.scope - libcontainer container 1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce. Jan 30 14:13:10.532390 containerd[1721]: time="2025-01-30T14:13:10.532327210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598ff764fd-6p78s,Uid:a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce\"" Jan 30 14:13:10.548321 systemd-networkd[1621]: caliae921bfec70: Link UP Jan 30 14:13:10.549135 systemd-networkd[1621]: caliae921bfec70: Gained carrier Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.324 [INFO][5308] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0 csi-node-driver- calico-system 9434d060-dc38-470e-9a84-12438e405d36 819 0 2025-01-30 14:12:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-1247579205 csi-node-driver-sc286 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliae921bfec70 [] []}} ContainerID="4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" Namespace="calico-system" Pod="csi-node-driver-sc286" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-" Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.324 [INFO][5308] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" Namespace="calico-system" Pod="csi-node-driver-sc286" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.374 [INFO][5334] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" HandleID="k8s-pod-network.4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" Workload="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.389 [INFO][5334] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" HandleID="k8s-pod-network.4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" Workload="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317820), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-1247579205", "pod":"csi-node-driver-sc286", "timestamp":"2025-01-30 14:13:10.37443063 +0000 UTC"}, Hostname:"ci-4081.3.0-a-1247579205", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.389 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.418 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.418 [INFO][5334] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-1247579205' Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.482 [INFO][5334] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.489 [INFO][5334] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.495 [INFO][5334] ipam/ipam.go 489: Trying affinity for 192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.497 [INFO][5334] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.500 [INFO][5334] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.501 [INFO][5334] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.502 [INFO][5334] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0 Jan 30 14:13:10.568656 
containerd[1721]: 2025-01-30 14:13:10.514 [INFO][5334] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.543 [INFO][5334] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.70/26] block=192.168.121.64/26 handle="k8s-pod-network.4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.543 [INFO][5334] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.70/26] handle="k8s-pod-network.4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" host="ci-4081.3.0-a-1247579205" Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.543 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:10.568656 containerd[1721]: 2025-01-30 14:13:10.543 [INFO][5334] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.70/26] IPv6=[] ContainerID="4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" HandleID="k8s-pod-network.4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" Workload="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:10.569243 containerd[1721]: 2025-01-30 14:13:10.545 [INFO][5308] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" Namespace="calico-system" Pod="csi-node-driver-sc286" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"9434d060-dc38-470e-9a84-12438e405d36", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"", Pod:"csi-node-driver-sc286", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae921bfec70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:10.569243 containerd[1721]: 2025-01-30 14:13:10.545 [INFO][5308] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.70/32] ContainerID="4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" Namespace="calico-system" Pod="csi-node-driver-sc286" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:10.569243 containerd[1721]: 2025-01-30 14:13:10.545 [INFO][5308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae921bfec70 ContainerID="4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" Namespace="calico-system" Pod="csi-node-driver-sc286" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:10.569243 containerd[1721]: 2025-01-30 
14:13:10.548 [INFO][5308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" Namespace="calico-system" Pod="csi-node-driver-sc286" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:10.569243 containerd[1721]: 2025-01-30 14:13:10.549 [INFO][5308] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" Namespace="calico-system" Pod="csi-node-driver-sc286" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9434d060-dc38-470e-9a84-12438e405d36", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0", Pod:"csi-node-driver-sc286", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae921bfec70", MAC:"ae:03:d7:a9:89:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:10.569243 containerd[1721]: 2025-01-30 14:13:10.565 [INFO][5308] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0" Namespace="calico-system" Pod="csi-node-driver-sc286" WorkloadEndpoint="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:10.593697 containerd[1721]: time="2025-01-30T14:13:10.593502634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:10.593697 containerd[1721]: time="2025-01-30T14:13:10.593673194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:10.594397 containerd[1721]: time="2025-01-30T14:13:10.594147554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:10.594397 containerd[1721]: time="2025-01-30T14:13:10.594315634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:10.612433 systemd[1]: Started cri-containerd-4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0.scope - libcontainer container 4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0. 
Jan 30 14:13:10.641899 containerd[1721]: time="2025-01-30T14:13:10.641847972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sc286,Uid:9434d060-dc38-470e-9a84-12438e405d36,Namespace:calico-system,Attempt:1,} returns sandbox id \"4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0\"" Jan 30 14:13:10.652569 systemd-networkd[1621]: cali70ee31bcaca: Gained IPv6LL Jan 30 14:13:10.653070 systemd-networkd[1621]: cali178ca8f796a: Gained IPv6LL Jan 30 14:13:11.740428 systemd-networkd[1621]: caliae921bfec70: Gained IPv6LL Jan 30 14:13:11.868400 systemd-networkd[1621]: cali65214a4fc9c: Gained IPv6LL Jan 30 14:13:13.025155 containerd[1721]: time="2025-01-30T14:13:13.025089881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:13.027367 containerd[1721]: time="2025-01-30T14:13:13.027328282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 30 14:13:13.031780 containerd[1721]: time="2025-01-30T14:13:13.031739843Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:13.037396 containerd[1721]: time="2025-01-30T14:13:13.037061005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:13.038054 containerd[1721]: time="2025-01-30T14:13:13.037635685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 5.5227072s" Jan 30 14:13:13.038054 containerd[1721]: time="2025-01-30T14:13:13.037669845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:13:13.040548 containerd[1721]: time="2025-01-30T14:13:13.040493967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 14:13:13.049247 containerd[1721]: time="2025-01-30T14:13:13.049189170Z" level=info msg="CreateContainer within sandbox \"7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:13:13.085768 containerd[1721]: time="2025-01-30T14:13:13.085720504Z" level=info msg="CreateContainer within sandbox \"7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dc8d4fc83b91f833d92e1db28b50816084d3eab81d38dbcc8d45ba01621932ce\"" Jan 30 14:13:13.086537 containerd[1721]: time="2025-01-30T14:13:13.086501224Z" level=info msg="StartContainer for \"dc8d4fc83b91f833d92e1db28b50816084d3eab81d38dbcc8d45ba01621932ce\"" Jan 30 14:13:13.117466 systemd[1]: Started cri-containerd-dc8d4fc83b91f833d92e1db28b50816084d3eab81d38dbcc8d45ba01621932ce.scope - libcontainer container dc8d4fc83b91f833d92e1db28b50816084d3eab81d38dbcc8d45ba01621932ce. 
Jan 30 14:13:13.150872 containerd[1721]: time="2025-01-30T14:13:13.150822169Z" level=info msg="StartContainer for \"dc8d4fc83b91f833d92e1db28b50816084d3eab81d38dbcc8d45ba01621932ce\" returns successfully" Jan 30 14:13:14.282176 kubelet[3190]: I0130 14:13:14.282141 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:13:14.703444 kubelet[3190]: I0130 14:13:14.703375 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-598ff764fd-v4h8w" podStartSLOduration=28.177573285 podStartE2EDuration="33.703344648s" podCreationTimestamp="2025-01-30 14:12:41 +0000 UTC" firstStartedPulling="2025-01-30 14:13:07.514605363 +0000 UTC m=+41.598733640" lastFinishedPulling="2025-01-30 14:13:13.040376726 +0000 UTC m=+47.124505003" observedRunningTime="2025-01-30 14:13:13.31177403 +0000 UTC m=+47.395902347" watchObservedRunningTime="2025-01-30 14:13:14.703344648 +0000 UTC m=+48.787472925" Jan 30 14:13:15.304144 containerd[1721]: time="2025-01-30T14:13:15.303408269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:15.306482 containerd[1721]: time="2025-01-30T14:13:15.306437030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 30 14:13:15.313730 containerd[1721]: time="2025-01-30T14:13:15.313672594Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:15.318273 containerd[1721]: time="2025-01-30T14:13:15.318186477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:15.319049 containerd[1721]: 
time="2025-01-30T14:13:15.318898477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.27836299s" Jan 30 14:13:15.319049 containerd[1721]: time="2025-01-30T14:13:15.318938517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 30 14:13:15.322045 containerd[1721]: time="2025-01-30T14:13:15.321867359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:13:15.348086 containerd[1721]: time="2025-01-30T14:13:15.347775454Z" level=info msg="CreateContainer within sandbox \"7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 14:13:15.406256 containerd[1721]: time="2025-01-30T14:13:15.406131167Z" level=info msg="CreateContainer within sandbox \"7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"95a6608643daf9a1b328358a21e9b6bf56850b6d5afcfcea3dd7c7e468f3ce31\"" Jan 30 14:13:15.407553 containerd[1721]: time="2025-01-30T14:13:15.407507248Z" level=info msg="StartContainer for \"95a6608643daf9a1b328358a21e9b6bf56850b6d5afcfcea3dd7c7e468f3ce31\"" Jan 30 14:13:15.435533 systemd[1]: Started cri-containerd-95a6608643daf9a1b328358a21e9b6bf56850b6d5afcfcea3dd7c7e468f3ce31.scope - libcontainer container 95a6608643daf9a1b328358a21e9b6bf56850b6d5afcfcea3dd7c7e468f3ce31. 
Jan 30 14:13:15.712207 containerd[1721]: time="2025-01-30T14:13:15.711491740Z" level=info msg="StartContainer for \"95a6608643daf9a1b328358a21e9b6bf56850b6d5afcfcea3dd7c7e468f3ce31\" returns successfully" Jan 30 14:13:15.754339 containerd[1721]: time="2025-01-30T14:13:15.754285324Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:15.757256 containerd[1721]: time="2025-01-30T14:13:15.757110046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 14:13:15.759569 containerd[1721]: time="2025-01-30T14:13:15.759524847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 437.615208ms" Jan 30 14:13:15.759569 containerd[1721]: time="2025-01-30T14:13:15.759568247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:13:15.761772 containerd[1721]: time="2025-01-30T14:13:15.761580808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 14:13:15.763034 containerd[1721]: time="2025-01-30T14:13:15.762958849Z" level=info msg="CreateContainer within sandbox \"1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:13:15.803742 containerd[1721]: time="2025-01-30T14:13:15.803644272Z" level=info msg="CreateContainer within sandbox \"1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"162853084aa696774fb3d05a08c85e7c356d4d98aff04c11359a8d8fa3a4af43\"" Jan 30 14:13:15.804521 containerd[1721]: time="2025-01-30T14:13:15.804399193Z" level=info msg="StartContainer for \"162853084aa696774fb3d05a08c85e7c356d4d98aff04c11359a8d8fa3a4af43\"" Jan 30 14:13:15.842493 systemd[1]: Started cri-containerd-162853084aa696774fb3d05a08c85e7c356d4d98aff04c11359a8d8fa3a4af43.scope - libcontainer container 162853084aa696774fb3d05a08c85e7c356d4d98aff04c11359a8d8fa3a4af43. Jan 30 14:13:15.883177 containerd[1721]: time="2025-01-30T14:13:15.883124517Z" level=info msg="StartContainer for \"162853084aa696774fb3d05a08c85e7c356d4d98aff04c11359a8d8fa3a4af43\" returns successfully" Jan 30 14:13:16.317490 kubelet[3190]: I0130 14:13:16.316593 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d47d89cdf-pw5ns" podStartSLOduration=28.700215819 podStartE2EDuration="34.316570363s" podCreationTimestamp="2025-01-30 14:12:42 +0000 UTC" firstStartedPulling="2025-01-30 14:13:09.705095775 +0000 UTC m=+43.789224052" lastFinishedPulling="2025-01-30 14:13:15.321450279 +0000 UTC m=+49.405578596" observedRunningTime="2025-01-30 14:13:16.313862761 +0000 UTC m=+50.397991038" watchObservedRunningTime="2025-01-30 14:13:16.316570363 +0000 UTC m=+50.400698640" Jan 30 14:13:16.367167 kubelet[3190]: I0130 14:13:16.367091 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-598ff764fd-6p78s" podStartSLOduration=30.140588674 podStartE2EDuration="35.367069391s" podCreationTimestamp="2025-01-30 14:12:41 +0000 UTC" firstStartedPulling="2025-01-30 14:13:10.534007811 +0000 UTC m=+44.618136088" lastFinishedPulling="2025-01-30 14:13:15.760488528 +0000 UTC m=+49.844616805" observedRunningTime="2025-01-30 14:13:16.336918894 +0000 UTC m=+50.421047171" watchObservedRunningTime="2025-01-30 14:13:16.367069391 +0000 UTC m=+50.451197668" Jan 30 14:13:16.530461 systemd[1]: 
run-containerd-runc-k8s.io-95a6608643daf9a1b328358a21e9b6bf56850b6d5afcfcea3dd7c7e468f3ce31-runc.VwfFvF.mount: Deactivated successfully. Jan 30 14:13:17.300903 kubelet[3190]: I0130 14:13:17.300218 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:13:17.640242 containerd[1721]: time="2025-01-30T14:13:17.640089513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:17.642895 containerd[1721]: time="2025-01-30T14:13:17.642770475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 30 14:13:17.647513 containerd[1721]: time="2025-01-30T14:13:17.647445597Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:17.653958 containerd[1721]: time="2025-01-30T14:13:17.653896321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:17.655278 containerd[1721]: time="2025-01-30T14:13:17.655091282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.893471754s" Jan 30 14:13:17.655407 containerd[1721]: time="2025-01-30T14:13:17.655382282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 30 14:13:17.659438 containerd[1721]: time="2025-01-30T14:13:17.659406204Z" level=info 
msg="CreateContainer within sandbox \"4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 14:13:17.703676 containerd[1721]: time="2025-01-30T14:13:17.703567989Z" level=info msg="CreateContainer within sandbox \"4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f4d23be64a5af54f15f8f13bb86a3c972afa514902c586ca826d74d0f5372f09\"" Jan 30 14:13:17.704503 containerd[1721]: time="2025-01-30T14:13:17.704478070Z" level=info msg="StartContainer for \"f4d23be64a5af54f15f8f13bb86a3c972afa514902c586ca826d74d0f5372f09\"" Jan 30 14:13:17.739456 systemd[1]: Started cri-containerd-f4d23be64a5af54f15f8f13bb86a3c972afa514902c586ca826d74d0f5372f09.scope - libcontainer container f4d23be64a5af54f15f8f13bb86a3c972afa514902c586ca826d74d0f5372f09. Jan 30 14:13:17.790931 containerd[1721]: time="2025-01-30T14:13:17.790801359Z" level=info msg="StartContainer for \"f4d23be64a5af54f15f8f13bb86a3c972afa514902c586ca826d74d0f5372f09\" returns successfully" Jan 30 14:13:17.795357 containerd[1721]: time="2025-01-30T14:13:17.795301401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 14:13:19.150355 containerd[1721]: time="2025-01-30T14:13:19.150261929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:19.153984 containerd[1721]: time="2025-01-30T14:13:19.153944131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 30 14:13:19.158396 containerd[1721]: time="2025-01-30T14:13:19.158330374Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:19.163350 
containerd[1721]: time="2025-01-30T14:13:19.163293217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:19.164094 containerd[1721]: time="2025-01-30T14:13:19.164065377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.368717456s" Jan 30 14:13:19.164337 containerd[1721]: time="2025-01-30T14:13:19.164183417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 30 14:13:19.167946 containerd[1721]: time="2025-01-30T14:13:19.167873659Z" level=info msg="CreateContainer within sandbox \"4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 14:13:19.209977 containerd[1721]: time="2025-01-30T14:13:19.209925563Z" level=info msg="CreateContainer within sandbox \"4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5bc433f8c05c69e0e640b0e7066e6e35de6806f78cb51f0669da943ce5edbf15\"" Jan 30 14:13:19.210827 containerd[1721]: time="2025-01-30T14:13:19.210787323Z" level=info msg="StartContainer for \"5bc433f8c05c69e0e640b0e7066e6e35de6806f78cb51f0669da943ce5edbf15\"" Jan 30 14:13:19.243691 systemd[1]: run-containerd-runc-k8s.io-5bc433f8c05c69e0e640b0e7066e6e35de6806f78cb51f0669da943ce5edbf15-runc.noUHAx.mount: Deactivated 
successfully. Jan 30 14:13:19.258439 systemd[1]: Started cri-containerd-5bc433f8c05c69e0e640b0e7066e6e35de6806f78cb51f0669da943ce5edbf15.scope - libcontainer container 5bc433f8c05c69e0e640b0e7066e6e35de6806f78cb51f0669da943ce5edbf15. Jan 30 14:13:19.358437 containerd[1721]: time="2025-01-30T14:13:19.358393407Z" level=info msg="StartContainer for \"5bc433f8c05c69e0e640b0e7066e6e35de6806f78cb51f0669da943ce5edbf15\" returns successfully" Jan 30 14:13:20.197110 kubelet[3190]: I0130 14:13:20.197070 3190 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 14:13:20.197110 kubelet[3190]: I0130 14:13:20.197116 3190 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 14:13:20.391364 kubelet[3190]: I0130 14:13:20.391203 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sc286" podStartSLOduration=29.869828228 podStartE2EDuration="38.391182873s" podCreationTimestamp="2025-01-30 14:12:42 +0000 UTC" firstStartedPulling="2025-01-30 14:13:10.643838453 +0000 UTC m=+44.727966730" lastFinishedPulling="2025-01-30 14:13:19.165193098 +0000 UTC m=+53.249321375" observedRunningTime="2025-01-30 14:13:20.388535511 +0000 UTC m=+54.472663788" watchObservedRunningTime="2025-01-30 14:13:20.391182873 +0000 UTC m=+54.475311150" Jan 30 14:13:25.991879 containerd[1721]: time="2025-01-30T14:13:25.991840448Z" level=info msg="StopPodSandbox for \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\"" Jan 30 14:13:26.072276 containerd[1721]: 2025-01-30 14:13:26.033 [WARNING][5750] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7b1fe55d-d807-4720-87c5-71fc636c7ec3", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7", Pod:"coredns-6f6b679f8f-wzldc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70107ec59bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:26.072276 containerd[1721]: 2025-01-30 14:13:26.034 [INFO][5750] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:13:26.072276 containerd[1721]: 2025-01-30 14:13:26.034 [INFO][5750] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" iface="eth0" netns="" Jan 30 14:13:26.072276 containerd[1721]: 2025-01-30 14:13:26.034 [INFO][5750] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:13:26.072276 containerd[1721]: 2025-01-30 14:13:26.034 [INFO][5750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:13:26.072276 containerd[1721]: 2025-01-30 14:13:26.056 [INFO][5756] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" HandleID="k8s-pod-network.7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:26.072276 containerd[1721]: 2025-01-30 14:13:26.056 [INFO][5756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:26.072276 containerd[1721]: 2025-01-30 14:13:26.056 [INFO][5756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:26.072276 containerd[1721]: 2025-01-30 14:13:26.066 [WARNING][5756] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" HandleID="k8s-pod-network.7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:26.072276 containerd[1721]: 2025-01-30 14:13:26.066 [INFO][5756] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" HandleID="k8s-pod-network.7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:26.072276 containerd[1721]: 2025-01-30 14:13:26.068 [INFO][5756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:26.072276 containerd[1721]: 2025-01-30 14:13:26.070 [INFO][5750] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:13:26.072820 containerd[1721]: time="2025-01-30T14:13:26.072290719Z" level=info msg="TearDown network for sandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\" successfully" Jan 30 14:13:26.072820 containerd[1721]: time="2025-01-30T14:13:26.072319359Z" level=info msg="StopPodSandbox for \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\" returns successfully" Jan 30 14:13:26.072919 containerd[1721]: time="2025-01-30T14:13:26.072844679Z" level=info msg="RemovePodSandbox for \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\"" Jan 30 14:13:26.072919 containerd[1721]: time="2025-01-30T14:13:26.072873679Z" level=info msg="Forcibly stopping sandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\"" Jan 30 14:13:26.147913 containerd[1721]: 2025-01-30 14:13:26.111 [WARNING][5777] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7b1fe55d-d807-4720-87c5-71fc636c7ec3", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"bb91dd9338f0e96082f0423b6514c3bcdb3f1976e0eca90222ebb6d5137a82b7", Pod:"coredns-6f6b679f8f-wzldc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70107ec59bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:26.147913 containerd[1721]: 2025-01-30 14:13:26.111 [INFO][5777] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:13:26.147913 containerd[1721]: 2025-01-30 14:13:26.111 [INFO][5777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" iface="eth0" netns="" Jan 30 14:13:26.147913 containerd[1721]: 2025-01-30 14:13:26.111 [INFO][5777] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:13:26.147913 containerd[1721]: 2025-01-30 14:13:26.111 [INFO][5777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:13:26.147913 containerd[1721]: 2025-01-30 14:13:26.133 [INFO][5783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" HandleID="k8s-pod-network.7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:26.147913 containerd[1721]: 2025-01-30 14:13:26.133 [INFO][5783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:26.147913 containerd[1721]: 2025-01-30 14:13:26.133 [INFO][5783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:26.147913 containerd[1721]: 2025-01-30 14:13:26.143 [WARNING][5783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" HandleID="k8s-pod-network.7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:26.147913 containerd[1721]: 2025-01-30 14:13:26.143 [INFO][5783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" HandleID="k8s-pod-network.7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--wzldc-eth0" Jan 30 14:13:26.147913 containerd[1721]: 2025-01-30 14:13:26.145 [INFO][5783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:26.147913 containerd[1721]: 2025-01-30 14:13:26.146 [INFO][5777] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf" Jan 30 14:13:26.148368 containerd[1721]: time="2025-01-30T14:13:26.147970985Z" level=info msg="TearDown network for sandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\" successfully" Jan 30 14:13:26.159023 containerd[1721]: time="2025-01-30T14:13:26.158970915Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:13:26.159161 containerd[1721]: time="2025-01-30T14:13:26.159069435Z" level=info msg="RemovePodSandbox \"7d2f6893d372be159694f084de18f2b4046306ee1f5cc2d261093405ae425ecf\" returns successfully" Jan 30 14:13:26.159784 containerd[1721]: time="2025-01-30T14:13:26.159755196Z" level=info msg="StopPodSandbox for \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\"" Jan 30 14:13:26.233314 containerd[1721]: 2025-01-30 14:13:26.200 [WARNING][5802] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0", GenerateName:"calico-kube-controllers-6d47d89cdf-", Namespace:"calico-system", SelfLink:"", UID:"3b6c2c7a-4ead-45d0-9c40-04e67f8b365b", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d47d89cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc", Pod:"calico-kube-controllers-6d47d89cdf-pw5ns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70ee31bcaca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:26.233314 containerd[1721]: 2025-01-30 14:13:26.200 [INFO][5802] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:13:26.233314 containerd[1721]: 2025-01-30 14:13:26.200 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" iface="eth0" netns="" Jan 30 14:13:26.233314 containerd[1721]: 2025-01-30 14:13:26.200 [INFO][5802] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:13:26.233314 containerd[1721]: 2025-01-30 14:13:26.200 [INFO][5802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:13:26.233314 containerd[1721]: 2025-01-30 14:13:26.219 [INFO][5808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" HandleID="k8s-pod-network.3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Workload="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:26.233314 containerd[1721]: 2025-01-30 14:13:26.220 [INFO][5808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:26.233314 containerd[1721]: 2025-01-30 14:13:26.220 [INFO][5808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:26.233314 containerd[1721]: 2025-01-30 14:13:26.228 [WARNING][5808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" HandleID="k8s-pod-network.3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Workload="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:26.233314 containerd[1721]: 2025-01-30 14:13:26.229 [INFO][5808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" HandleID="k8s-pod-network.3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Workload="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:26.233314 containerd[1721]: 2025-01-30 14:13:26.230 [INFO][5808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:26.233314 containerd[1721]: 2025-01-30 14:13:26.232 [INFO][5802] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:13:26.234418 containerd[1721]: time="2025-01-30T14:13:26.233369301Z" level=info msg="TearDown network for sandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\" successfully" Jan 30 14:13:26.234418 containerd[1721]: time="2025-01-30T14:13:26.233428901Z" level=info msg="StopPodSandbox for \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\" returns successfully" Jan 30 14:13:26.234418 containerd[1721]: time="2025-01-30T14:13:26.233990541Z" level=info msg="RemovePodSandbox for \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\"" Jan 30 14:13:26.234418 containerd[1721]: time="2025-01-30T14:13:26.234022821Z" level=info msg="Forcibly stopping sandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\"" Jan 30 14:13:26.315039 containerd[1721]: 2025-01-30 14:13:26.272 [WARNING][5826] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0", GenerateName:"calico-kube-controllers-6d47d89cdf-", Namespace:"calico-system", SelfLink:"", UID:"3b6c2c7a-4ead-45d0-9c40-04e67f8b365b", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d47d89cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"7745911b391fda598beec74d4a9383a160fab10456f8c2dd5b1088239d65b4bc", Pod:"calico-kube-controllers-6d47d89cdf-pw5ns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70ee31bcaca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:26.315039 containerd[1721]: 2025-01-30 14:13:26.273 [INFO][5826] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:13:26.315039 containerd[1721]: 2025-01-30 14:13:26.273 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" iface="eth0" netns="" Jan 30 14:13:26.315039 containerd[1721]: 2025-01-30 14:13:26.273 [INFO][5826] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:13:26.315039 containerd[1721]: 2025-01-30 14:13:26.273 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:13:26.315039 containerd[1721]: 2025-01-30 14:13:26.295 [INFO][5832] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" HandleID="k8s-pod-network.3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Workload="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:26.315039 containerd[1721]: 2025-01-30 14:13:26.296 [INFO][5832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:26.315039 containerd[1721]: 2025-01-30 14:13:26.297 [INFO][5832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:26.315039 containerd[1721]: 2025-01-30 14:13:26.309 [WARNING][5832] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" HandleID="k8s-pod-network.3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Workload="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:26.315039 containerd[1721]: 2025-01-30 14:13:26.309 [INFO][5832] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" HandleID="k8s-pod-network.3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Workload="ci--4081.3.0--a--1247579205-k8s-calico--kube--controllers--6d47d89cdf--pw5ns-eth0" Jan 30 14:13:26.315039 containerd[1721]: 2025-01-30 14:13:26.311 [INFO][5832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:26.315039 containerd[1721]: 2025-01-30 14:13:26.313 [INFO][5826] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386" Jan 30 14:13:26.315039 containerd[1721]: time="2025-01-30T14:13:26.314945812Z" level=info msg="TearDown network for sandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\" successfully" Jan 30 14:13:26.325115 containerd[1721]: time="2025-01-30T14:13:26.324317861Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:13:26.325115 containerd[1721]: time="2025-01-30T14:13:26.324406461Z" level=info msg="RemovePodSandbox \"3fb421ec47731b314de062b2a2522d1a1e4b0c1c492369260ed4fb81e0b1a386\" returns successfully" Jan 30 14:13:26.325115 containerd[1721]: time="2025-01-30T14:13:26.324864141Z" level=info msg="StopPodSandbox for \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\"" Jan 30 14:13:26.401121 containerd[1721]: 2025-01-30 14:13:26.364 [WARNING][5850] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8f59b1d6-1b79-481b-b739-8d25c393b80d", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a", Pod:"coredns-6f6b679f8f-t4568", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali178ca8f796a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:26.401121 containerd[1721]: 2025-01-30 14:13:26.365 [INFO][5850] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:13:26.401121 containerd[1721]: 2025-01-30 14:13:26.365 [INFO][5850] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" iface="eth0" netns="" Jan 30 14:13:26.401121 containerd[1721]: 2025-01-30 14:13:26.365 [INFO][5850] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:13:26.401121 containerd[1721]: 2025-01-30 14:13:26.365 [INFO][5850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:13:26.401121 containerd[1721]: 2025-01-30 14:13:26.387 [INFO][5856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" HandleID="k8s-pod-network.192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:26.401121 containerd[1721]: 2025-01-30 14:13:26.387 [INFO][5856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 14:13:26.401121 containerd[1721]: 2025-01-30 14:13:26.387 [INFO][5856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:26.401121 containerd[1721]: 2025-01-30 14:13:26.396 [WARNING][5856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" HandleID="k8s-pod-network.192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:26.401121 containerd[1721]: 2025-01-30 14:13:26.396 [INFO][5856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" HandleID="k8s-pod-network.192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:26.401121 containerd[1721]: 2025-01-30 14:13:26.398 [INFO][5856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:26.401121 containerd[1721]: 2025-01-30 14:13:26.399 [INFO][5850] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:13:26.401934 containerd[1721]: time="2025-01-30T14:13:26.401164688Z" level=info msg="TearDown network for sandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\" successfully" Jan 30 14:13:26.401934 containerd[1721]: time="2025-01-30T14:13:26.401192048Z" level=info msg="StopPodSandbox for \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\" returns successfully" Jan 30 14:13:26.401934 containerd[1721]: time="2025-01-30T14:13:26.401677609Z" level=info msg="RemovePodSandbox for \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\"" Jan 30 14:13:26.401934 containerd[1721]: time="2025-01-30T14:13:26.401707129Z" level=info msg="Forcibly stopping sandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\"" Jan 30 14:13:26.467047 containerd[1721]: 2025-01-30 14:13:26.436 [WARNING][5874] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8f59b1d6-1b79-481b-b739-8d25c393b80d", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"362f36bc3034de914fc1893fa2000380937ae6d87146aff99c6e6a49aa9f329a", Pod:"coredns-6f6b679f8f-t4568", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali178ca8f796a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:26.467047 containerd[1721]: 2025-01-30 14:13:26.436 [INFO][5874] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:13:26.467047 containerd[1721]: 2025-01-30 14:13:26.436 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" iface="eth0" netns="" Jan 30 14:13:26.467047 containerd[1721]: 2025-01-30 14:13:26.436 [INFO][5874] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:13:26.467047 containerd[1721]: 2025-01-30 14:13:26.436 [INFO][5874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:13:26.467047 containerd[1721]: 2025-01-30 14:13:26.454 [INFO][5880] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" HandleID="k8s-pod-network.192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:26.467047 containerd[1721]: 2025-01-30 14:13:26.454 [INFO][5880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:26.467047 containerd[1721]: 2025-01-30 14:13:26.454 [INFO][5880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:26.467047 containerd[1721]: 2025-01-30 14:13:26.462 [WARNING][5880] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" HandleID="k8s-pod-network.192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:26.467047 containerd[1721]: 2025-01-30 14:13:26.462 [INFO][5880] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" HandleID="k8s-pod-network.192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Workload="ci--4081.3.0--a--1247579205-k8s-coredns--6f6b679f8f--t4568-eth0" Jan 30 14:13:26.467047 containerd[1721]: 2025-01-30 14:13:26.464 [INFO][5880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:26.467047 containerd[1721]: 2025-01-30 14:13:26.465 [INFO][5874] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a" Jan 30 14:13:26.467493 containerd[1721]: time="2025-01-30T14:13:26.467096906Z" level=info msg="TearDown network for sandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\" successfully" Jan 30 14:13:26.483064 containerd[1721]: time="2025-01-30T14:13:26.482975080Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:13:26.483064 containerd[1721]: time="2025-01-30T14:13:26.483050960Z" level=info msg="RemovePodSandbox \"192d6df3eac48bf43ebee517beda9a41e32b7f5abd3008618296f9ad9b5a4c9a\" returns successfully" Jan 30 14:13:26.484076 containerd[1721]: time="2025-01-30T14:13:26.483822041Z" level=info msg="StopPodSandbox for \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\"" Jan 30 14:13:26.553325 containerd[1721]: 2025-01-30 14:13:26.519 [WARNING][5898] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0", GenerateName:"calico-apiserver-598ff764fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b4f3c05-51a8-4482-9097-9558572b9148", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598ff764fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805", Pod:"calico-apiserver-598ff764fd-v4h8w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c8273b7948", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:26.553325 containerd[1721]: 2025-01-30 14:13:26.519 [INFO][5898] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:13:26.553325 containerd[1721]: 2025-01-30 14:13:26.519 [INFO][5898] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" iface="eth0" netns="" Jan 30 14:13:26.553325 containerd[1721]: 2025-01-30 14:13:26.519 [INFO][5898] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:13:26.553325 containerd[1721]: 2025-01-30 14:13:26.519 [INFO][5898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:13:26.553325 containerd[1721]: 2025-01-30 14:13:26.539 [INFO][5904] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" HandleID="k8s-pod-network.8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:26.553325 containerd[1721]: 2025-01-30 14:13:26.539 [INFO][5904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:26.553325 containerd[1721]: 2025-01-30 14:13:26.539 [INFO][5904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:26.553325 containerd[1721]: 2025-01-30 14:13:26.548 [WARNING][5904] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" HandleID="k8s-pod-network.8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:26.553325 containerd[1721]: 2025-01-30 14:13:26.548 [INFO][5904] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" HandleID="k8s-pod-network.8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:26.553325 containerd[1721]: 2025-01-30 14:13:26.550 [INFO][5904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:26.553325 containerd[1721]: 2025-01-30 14:13:26.551 [INFO][5898] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:13:26.554001 containerd[1721]: time="2025-01-30T14:13:26.553360502Z" level=info msg="TearDown network for sandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\" successfully" Jan 30 14:13:26.554001 containerd[1721]: time="2025-01-30T14:13:26.553386342Z" level=info msg="StopPodSandbox for \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\" returns successfully" Jan 30 14:13:26.554983 containerd[1721]: time="2025-01-30T14:13:26.554639183Z" level=info msg="RemovePodSandbox for \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\"" Jan 30 14:13:26.554983 containerd[1721]: time="2025-01-30T14:13:26.554676303Z" level=info msg="Forcibly stopping sandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\"" Jan 30 14:13:26.624767 containerd[1721]: 2025-01-30 14:13:26.590 [WARNING][5922] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0", GenerateName:"calico-apiserver-598ff764fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b4f3c05-51a8-4482-9097-9558572b9148", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598ff764fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"7ef545b36795d4d7a5c77e13745a9a0825aab56cea03cb8a6908fcc6621a0805", Pod:"calico-apiserver-598ff764fd-v4h8w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c8273b7948", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:26.624767 containerd[1721]: 2025-01-30 14:13:26.591 [INFO][5922] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:13:26.624767 containerd[1721]: 2025-01-30 14:13:26.591 [INFO][5922] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" iface="eth0" netns="" Jan 30 14:13:26.624767 containerd[1721]: 2025-01-30 14:13:26.591 [INFO][5922] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:13:26.624767 containerd[1721]: 2025-01-30 14:13:26.591 [INFO][5922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:13:26.624767 containerd[1721]: 2025-01-30 14:13:26.609 [INFO][5928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" HandleID="k8s-pod-network.8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:26.624767 containerd[1721]: 2025-01-30 14:13:26.609 [INFO][5928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:26.624767 containerd[1721]: 2025-01-30 14:13:26.609 [INFO][5928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:26.624767 containerd[1721]: 2025-01-30 14:13:26.619 [WARNING][5928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" HandleID="k8s-pod-network.8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:26.624767 containerd[1721]: 2025-01-30 14:13:26.619 [INFO][5928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" HandleID="k8s-pod-network.8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--v4h8w-eth0" Jan 30 14:13:26.624767 containerd[1721]: 2025-01-30 14:13:26.621 [INFO][5928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:26.624767 containerd[1721]: 2025-01-30 14:13:26.623 [INFO][5922] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084" Jan 30 14:13:26.625913 containerd[1721]: time="2025-01-30T14:13:26.624838765Z" level=info msg="TearDown network for sandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\" successfully" Jan 30 14:13:26.636662 containerd[1721]: time="2025-01-30T14:13:26.636610696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:13:26.637006 containerd[1721]: time="2025-01-30T14:13:26.636857136Z" level=info msg="RemovePodSandbox \"8343f90d824f047323d27d4542d38546edf029c27aa1de2ee096c276d8436084\" returns successfully" Jan 30 14:13:26.637635 containerd[1721]: time="2025-01-30T14:13:26.637502816Z" level=info msg="StopPodSandbox for \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\"" Jan 30 14:13:26.708087 containerd[1721]: 2025-01-30 14:13:26.674 [WARNING][5946] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0", GenerateName:"calico-apiserver-598ff764fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598ff764fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce", Pod:"calico-apiserver-598ff764fd-6p78s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65214a4fc9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:26.708087 containerd[1721]: 2025-01-30 14:13:26.674 [INFO][5946] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:13:26.708087 containerd[1721]: 2025-01-30 14:13:26.674 [INFO][5946] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" iface="eth0" netns="" Jan 30 14:13:26.708087 containerd[1721]: 2025-01-30 14:13:26.674 [INFO][5946] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:13:26.708087 containerd[1721]: 2025-01-30 14:13:26.674 [INFO][5946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:13:26.708087 containerd[1721]: 2025-01-30 14:13:26.694 [INFO][5952] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" HandleID="k8s-pod-network.b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:26.708087 containerd[1721]: 2025-01-30 14:13:26.694 [INFO][5952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:26.708087 containerd[1721]: 2025-01-30 14:13:26.694 [INFO][5952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:26.708087 containerd[1721]: 2025-01-30 14:13:26.703 [WARNING][5952] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" HandleID="k8s-pod-network.b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:26.708087 containerd[1721]: 2025-01-30 14:13:26.703 [INFO][5952] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" HandleID="k8s-pod-network.b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:26.708087 containerd[1721]: 2025-01-30 14:13:26.705 [INFO][5952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:26.708087 containerd[1721]: 2025-01-30 14:13:26.706 [INFO][5946] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:13:26.708087 containerd[1721]: time="2025-01-30T14:13:26.707976958Z" level=info msg="TearDown network for sandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\" successfully" Jan 30 14:13:26.708087 containerd[1721]: time="2025-01-30T14:13:26.708005758Z" level=info msg="StopPodSandbox for \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\" returns successfully" Jan 30 14:13:26.709028 containerd[1721]: time="2025-01-30T14:13:26.708506759Z" level=info msg="RemovePodSandbox for \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\"" Jan 30 14:13:26.709028 containerd[1721]: time="2025-01-30T14:13:26.708535919Z" level=info msg="Forcibly stopping sandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\"" Jan 30 14:13:26.780378 containerd[1721]: 2025-01-30 14:13:26.747 [WARNING][5971] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0", GenerateName:"calico-apiserver-598ff764fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0a55dbb-d35d-47fc-9896-5a8a6fe5ec9e", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598ff764fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"1880a956782869675d0666fb6ebb2304e1673c96a0305aaef9769386d0d31cce", Pod:"calico-apiserver-598ff764fd-6p78s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65214a4fc9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:26.780378 containerd[1721]: 2025-01-30 14:13:26.747 [INFO][5971] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:13:26.780378 containerd[1721]: 2025-01-30 14:13:26.747 [INFO][5971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" iface="eth0" netns="" Jan 30 14:13:26.780378 containerd[1721]: 2025-01-30 14:13:26.747 [INFO][5971] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:13:26.780378 containerd[1721]: 2025-01-30 14:13:26.747 [INFO][5971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:13:26.780378 containerd[1721]: 2025-01-30 14:13:26.766 [INFO][5977] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" HandleID="k8s-pod-network.b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:26.780378 containerd[1721]: 2025-01-30 14:13:26.766 [INFO][5977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:26.780378 containerd[1721]: 2025-01-30 14:13:26.766 [INFO][5977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:26.780378 containerd[1721]: 2025-01-30 14:13:26.776 [WARNING][5977] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" HandleID="k8s-pod-network.b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:26.780378 containerd[1721]: 2025-01-30 14:13:26.776 [INFO][5977] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" HandleID="k8s-pod-network.b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Workload="ci--4081.3.0--a--1247579205-k8s-calico--apiserver--598ff764fd--6p78s-eth0" Jan 30 14:13:26.780378 containerd[1721]: 2025-01-30 14:13:26.777 [INFO][5977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:26.780378 containerd[1721]: 2025-01-30 14:13:26.778 [INFO][5971] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb" Jan 30 14:13:26.780789 containerd[1721]: time="2025-01-30T14:13:26.780420942Z" level=info msg="TearDown network for sandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\" successfully" Jan 30 14:13:26.789628 containerd[1721]: time="2025-01-30T14:13:26.789574350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:13:26.789746 containerd[1721]: time="2025-01-30T14:13:26.789657190Z" level=info msg="RemovePodSandbox \"b8ed67a88900e99a004d9397f02942b8cf0b21896a0f8d4504a3f3d2eb965eeb\" returns successfully" Jan 30 14:13:26.790264 containerd[1721]: time="2025-01-30T14:13:26.790219071Z" level=info msg="StopPodSandbox for \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\"" Jan 30 14:13:26.862910 containerd[1721]: 2025-01-30 14:13:26.826 [WARNING][5995] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9434d060-dc38-470e-9a84-12438e405d36", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0", Pod:"csi-node-driver-sc286", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae921bfec70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:26.862910 containerd[1721]: 2025-01-30 14:13:26.826 [INFO][5995] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:13:26.862910 containerd[1721]: 2025-01-30 14:13:26.826 [INFO][5995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" iface="eth0" netns="" Jan 30 14:13:26.862910 containerd[1721]: 2025-01-30 14:13:26.826 [INFO][5995] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:13:26.862910 containerd[1721]: 2025-01-30 14:13:26.826 [INFO][5995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:13:26.862910 containerd[1721]: 2025-01-30 14:13:26.847 [INFO][6002] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" HandleID="k8s-pod-network.d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Workload="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:26.862910 containerd[1721]: 2025-01-30 14:13:26.847 [INFO][6002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:26.862910 containerd[1721]: 2025-01-30 14:13:26.847 [INFO][6002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:26.862910 containerd[1721]: 2025-01-30 14:13:26.858 [WARNING][6002] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" HandleID="k8s-pod-network.d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Workload="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:26.862910 containerd[1721]: 2025-01-30 14:13:26.858 [INFO][6002] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" HandleID="k8s-pod-network.d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Workload="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:26.862910 containerd[1721]: 2025-01-30 14:13:26.860 [INFO][6002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:26.862910 containerd[1721]: 2025-01-30 14:13:26.861 [INFO][5995] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:13:26.863574 containerd[1721]: time="2025-01-30T14:13:26.862925655Z" level=info msg="TearDown network for sandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\" successfully" Jan 30 14:13:26.863574 containerd[1721]: time="2025-01-30T14:13:26.862973055Z" level=info msg="StopPodSandbox for \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\" returns successfully" Jan 30 14:13:26.864250 containerd[1721]: time="2025-01-30T14:13:26.864127776Z" level=info msg="RemovePodSandbox for \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\"" Jan 30 14:13:26.864250 containerd[1721]: time="2025-01-30T14:13:26.864166056Z" level=info msg="Forcibly stopping sandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\"" Jan 30 14:13:26.939912 containerd[1721]: 2025-01-30 14:13:26.906 [WARNING][6021] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9434d060-dc38-470e-9a84-12438e405d36", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-1247579205", ContainerID:"4d2c364daf5e35785b0aa2f31059cacb78141e9fea8469b6aa0cb7b6d949f9e0", Pod:"csi-node-driver-sc286", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae921bfec70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:13:26.939912 containerd[1721]: 2025-01-30 14:13:26.906 [INFO][6021] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:13:26.939912 containerd[1721]: 2025-01-30 14:13:26.906 [INFO][6021] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" iface="eth0" netns="" Jan 30 14:13:26.939912 containerd[1721]: 2025-01-30 14:13:26.906 [INFO][6021] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:13:26.939912 containerd[1721]: 2025-01-30 14:13:26.906 [INFO][6021] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:13:26.939912 containerd[1721]: 2025-01-30 14:13:26.927 [INFO][6027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" HandleID="k8s-pod-network.d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Workload="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:26.939912 containerd[1721]: 2025-01-30 14:13:26.927 [INFO][6027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:13:26.939912 containerd[1721]: 2025-01-30 14:13:26.927 [INFO][6027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:13:26.939912 containerd[1721]: 2025-01-30 14:13:26.935 [WARNING][6027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" HandleID="k8s-pod-network.d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Workload="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:26.939912 containerd[1721]: 2025-01-30 14:13:26.935 [INFO][6027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" HandleID="k8s-pod-network.d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Workload="ci--4081.3.0--a--1247579205-k8s-csi--node--driver--sc286-eth0" Jan 30 14:13:26.939912 containerd[1721]: 2025-01-30 14:13:26.937 [INFO][6027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:13:26.939912 containerd[1721]: 2025-01-30 14:13:26.938 [INFO][6021] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb" Jan 30 14:13:26.939912 containerd[1721]: time="2025-01-30T14:13:26.939884762Z" level=info msg="TearDown network for sandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\" successfully" Jan 30 14:13:26.947785 containerd[1721]: time="2025-01-30T14:13:26.947711009Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:13:26.948060 containerd[1721]: time="2025-01-30T14:13:26.947798289Z" level=info msg="RemovePodSandbox \"d2ffab3d674b49c765ccd1a8854fc9450dce471e798ceb20d7625fc20f1981cb\" returns successfully" Jan 30 14:13:33.151753 kubelet[3190]: I0130 14:13:33.151710 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:13:50.787446 kubelet[3190]: I0130 14:13:50.787197 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:14:21.228621 systemd[1]: Started sshd@7-10.200.20.19:22-10.200.16.10:51472.service - OpenSSH per-connection server daemon (10.200.16.10:51472). Jan 30 14:14:21.692700 sshd[6148]: Accepted publickey for core from 10.200.16.10 port 51472 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:21.695663 sshd[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:21.700385 systemd-logind[1694]: New session 10 of user core. Jan 30 14:14:21.708435 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:14:22.110644 sshd[6148]: pam_unix(sshd:session): session closed for user core Jan 30 14:14:22.114673 systemd[1]: sshd@7-10.200.20.19:22-10.200.16.10:51472.service: Deactivated successfully. Jan 30 14:14:22.117520 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:14:22.121099 systemd-logind[1694]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:14:22.122167 systemd-logind[1694]: Removed session 10. Jan 30 14:14:27.192556 systemd[1]: Started sshd@8-10.200.20.19:22-10.200.16.10:38052.service - OpenSSH per-connection server daemon (10.200.16.10:38052). Jan 30 14:14:27.630578 sshd[6186]: Accepted publickey for core from 10.200.16.10 port 38052 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:27.631739 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:27.636517 systemd-logind[1694]: New session 11 of user core. 
Jan 30 14:14:27.642410 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:14:28.016965 sshd[6186]: pam_unix(sshd:session): session closed for user core Jan 30 14:14:28.020495 systemd[1]: sshd@8-10.200.20.19:22-10.200.16.10:38052.service: Deactivated successfully. Jan 30 14:14:28.022620 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:14:28.024894 systemd-logind[1694]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:14:28.026114 systemd-logind[1694]: Removed session 11. Jan 30 14:14:33.103498 systemd[1]: Started sshd@9-10.200.20.19:22-10.200.16.10:38062.service - OpenSSH per-connection server daemon (10.200.16.10:38062). Jan 30 14:14:33.549550 sshd[6208]: Accepted publickey for core from 10.200.16.10 port 38062 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:33.551020 sshd[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:33.555285 systemd-logind[1694]: New session 12 of user core. Jan 30 14:14:33.559407 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:14:33.952528 sshd[6208]: pam_unix(sshd:session): session closed for user core Jan 30 14:14:33.956043 systemd[1]: sshd@9-10.200.20.19:22-10.200.16.10:38062.service: Deactivated successfully. Jan 30 14:14:33.956193 systemd-logind[1694]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:14:33.958646 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:14:33.961738 systemd-logind[1694]: Removed session 12. Jan 30 14:14:34.031605 systemd[1]: Started sshd@10-10.200.20.19:22-10.200.16.10:38068.service - OpenSSH per-connection server daemon (10.200.16.10:38068). 
Jan 30 14:14:34.468610 sshd[6221]: Accepted publickey for core from 10.200.16.10 port 38068 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:34.470056 sshd[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:34.475566 systemd-logind[1694]: New session 13 of user core. Jan 30 14:14:34.479409 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:14:34.882464 sshd[6221]: pam_unix(sshd:session): session closed for user core Jan 30 14:14:34.885611 systemd[1]: sshd@10-10.200.20.19:22-10.200.16.10:38068.service: Deactivated successfully. Jan 30 14:14:34.886280 systemd-logind[1694]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:14:34.889096 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:14:34.891627 systemd-logind[1694]: Removed session 13. Jan 30 14:14:34.965492 systemd[1]: Started sshd@11-10.200.20.19:22-10.200.16.10:38084.service - OpenSSH per-connection server daemon (10.200.16.10:38084). Jan 30 14:14:35.394798 sshd[6233]: Accepted publickey for core from 10.200.16.10 port 38084 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:35.396201 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:35.400298 systemd-logind[1694]: New session 14 of user core. Jan 30 14:14:35.405412 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:14:35.787850 sshd[6233]: pam_unix(sshd:session): session closed for user core Jan 30 14:14:35.791958 systemd-logind[1694]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:14:35.792161 systemd[1]: sshd@11-10.200.20.19:22-10.200.16.10:38084.service: Deactivated successfully. Jan 30 14:14:35.794860 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:14:35.798733 systemd-logind[1694]: Removed session 14. 
Jan 30 14:14:40.877525 systemd[1]: Started sshd@12-10.200.20.19:22-10.200.16.10:49234.service - OpenSSH per-connection server daemon (10.200.16.10:49234). Jan 30 14:14:41.306819 sshd[6250]: Accepted publickey for core from 10.200.16.10 port 49234 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:41.308243 sshd[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:41.312033 systemd-logind[1694]: New session 15 of user core. Jan 30 14:14:41.319407 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:14:41.696795 sshd[6250]: pam_unix(sshd:session): session closed for user core Jan 30 14:14:41.700497 systemd[1]: sshd@12-10.200.20.19:22-10.200.16.10:49234.service: Deactivated successfully. Jan 30 14:14:41.702611 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:14:41.703457 systemd-logind[1694]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:14:41.704711 systemd-logind[1694]: Removed session 15. Jan 30 14:14:46.779532 systemd[1]: Started sshd@13-10.200.20.19:22-10.200.16.10:47110.service - OpenSSH per-connection server daemon (10.200.16.10:47110). Jan 30 14:14:47.208037 sshd[6302]: Accepted publickey for core from 10.200.16.10 port 47110 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:47.209987 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:47.215891 systemd-logind[1694]: New session 16 of user core. Jan 30 14:14:47.225448 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:14:47.597190 sshd[6302]: pam_unix(sshd:session): session closed for user core Jan 30 14:14:47.601169 systemd[1]: sshd@13-10.200.20.19:22-10.200.16.10:47110.service: Deactivated successfully. Jan 30 14:14:47.603845 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:14:47.605642 systemd-logind[1694]: Session 16 logged out. Waiting for processes to exit. 
Jan 30 14:14:47.606750 systemd-logind[1694]: Removed session 16. Jan 30 14:14:52.676860 systemd[1]: Started sshd@14-10.200.20.19:22-10.200.16.10:47126.service - OpenSSH per-connection server daemon (10.200.16.10:47126). Jan 30 14:14:53.117635 sshd[6315]: Accepted publickey for core from 10.200.16.10 port 47126 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:53.118981 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:53.123183 systemd-logind[1694]: New session 17 of user core. Jan 30 14:14:53.127396 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:14:53.513394 sshd[6315]: pam_unix(sshd:session): session closed for user core Jan 30 14:14:53.518268 systemd[1]: sshd@14-10.200.20.19:22-10.200.16.10:47126.service: Deactivated successfully. Jan 30 14:14:53.521452 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:14:53.522733 systemd-logind[1694]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:14:53.524585 systemd-logind[1694]: Removed session 17. Jan 30 14:14:58.595574 systemd[1]: Started sshd@15-10.200.20.19:22-10.200.16.10:45764.service - OpenSSH per-connection server daemon (10.200.16.10:45764). Jan 30 14:14:59.022390 sshd[6347]: Accepted publickey for core from 10.200.16.10 port 45764 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:59.023852 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:59.027944 systemd-logind[1694]: New session 18 of user core. Jan 30 14:14:59.037428 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 14:14:59.411839 sshd[6347]: pam_unix(sshd:session): session closed for user core Jan 30 14:14:59.416435 systemd[1]: sshd@15-10.200.20.19:22-10.200.16.10:45764.service: Deactivated successfully. Jan 30 14:14:59.419076 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 30 14:14:59.421201 systemd-logind[1694]: Session 18 logged out. Waiting for processes to exit. Jan 30 14:14:59.422376 systemd-logind[1694]: Removed session 18. Jan 30 14:14:59.495785 systemd[1]: Started sshd@16-10.200.20.19:22-10.200.16.10:45768.service - OpenSSH per-connection server daemon (10.200.16.10:45768). Jan 30 14:14:59.922182 sshd[6360]: Accepted publickey for core from 10.200.16.10 port 45768 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:59.923566 sshd[6360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:59.927616 systemd-logind[1694]: New session 19 of user core. Jan 30 14:14:59.938421 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 14:15:00.411630 sshd[6360]: pam_unix(sshd:session): session closed for user core Jan 30 14:15:00.415559 systemd[1]: sshd@16-10.200.20.19:22-10.200.16.10:45768.service: Deactivated successfully. Jan 30 14:15:00.417718 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:15:00.418482 systemd-logind[1694]: Session 19 logged out. Waiting for processes to exit. Jan 30 14:15:00.419761 systemd-logind[1694]: Removed session 19. Jan 30 14:15:00.500520 systemd[1]: Started sshd@17-10.200.20.19:22-10.200.16.10:45784.service - OpenSSH per-connection server daemon (10.200.16.10:45784). Jan 30 14:15:00.930511 sshd[6371]: Accepted publickey for core from 10.200.16.10 port 45784 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:15:00.931959 sshd[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:15:00.936973 systemd-logind[1694]: New session 20 of user core. Jan 30 14:15:00.946432 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 14:15:03.026061 sshd[6371]: pam_unix(sshd:session): session closed for user core Jan 30 14:15:03.030501 systemd[1]: sshd@17-10.200.20.19:22-10.200.16.10:45784.service: Deactivated successfully. 
Jan 30 14:15:03.032475 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 14:15:03.034097 systemd-logind[1694]: Session 20 logged out. Waiting for processes to exit. Jan 30 14:15:03.035563 systemd-logind[1694]: Removed session 20. Jan 30 14:15:03.113529 systemd[1]: Started sshd@18-10.200.20.19:22-10.200.16.10:45786.service - OpenSSH per-connection server daemon (10.200.16.10:45786). Jan 30 14:15:03.539951 sshd[6389]: Accepted publickey for core from 10.200.16.10 port 45786 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:15:03.541400 sshd[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:15:03.546140 systemd-logind[1694]: New session 21 of user core. Jan 30 14:15:03.557595 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 14:15:04.051478 sshd[6389]: pam_unix(sshd:session): session closed for user core Jan 30 14:15:04.055084 systemd-logind[1694]: Session 21 logged out. Waiting for processes to exit. Jan 30 14:15:04.055735 systemd[1]: sshd@18-10.200.20.19:22-10.200.16.10:45786.service: Deactivated successfully. Jan 30 14:15:04.058777 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 14:15:04.060148 systemd-logind[1694]: Removed session 21. Jan 30 14:15:04.134542 systemd[1]: Started sshd@19-10.200.20.19:22-10.200.16.10:45794.service - OpenSSH per-connection server daemon (10.200.16.10:45794). Jan 30 14:15:04.562879 sshd[6400]: Accepted publickey for core from 10.200.16.10 port 45794 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:15:04.564281 sshd[6400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:15:04.568168 systemd-logind[1694]: New session 22 of user core. Jan 30 14:15:04.577386 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 30 14:15:04.948445 sshd[6400]: pam_unix(sshd:session): session closed for user core
Jan 30 14:15:04.952377 systemd[1]: sshd@19-10.200.20.19:22-10.200.16.10:45794.service: Deactivated successfully.
Jan 30 14:15:04.955241 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 14:15:04.956258 systemd-logind[1694]: Session 22 logged out. Waiting for processes to exit.
Jan 30 14:15:04.957625 systemd-logind[1694]: Removed session 22.
Jan 30 14:15:10.027735 systemd[1]: Started sshd@20-10.200.20.19:22-10.200.16.10:56490.service - OpenSSH per-connection server daemon (10.200.16.10:56490).
Jan 30 14:15:10.459687 sshd[6417]: Accepted publickey for core from 10.200.16.10 port 56490 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:15:10.461078 sshd[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:15:10.465496 systemd-logind[1694]: New session 23 of user core.
Jan 30 14:15:10.470409 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 14:15:10.844490 sshd[6417]: pam_unix(sshd:session): session closed for user core
Jan 30 14:15:10.848703 systemd[1]: sshd@20-10.200.20.19:22-10.200.16.10:56490.service: Deactivated successfully.
Jan 30 14:15:10.850636 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 14:15:10.851389 systemd-logind[1694]: Session 23 logged out. Waiting for processes to exit.
Jan 30 14:15:10.854413 systemd-logind[1694]: Removed session 23.
Jan 30 14:15:15.930408 systemd[1]: Started sshd@21-10.200.20.19:22-10.200.16.10:42132.service - OpenSSH per-connection server daemon (10.200.16.10:42132).
Jan 30 14:15:16.360675 sshd[6471]: Accepted publickey for core from 10.200.16.10 port 42132 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:15:16.362923 sshd[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:15:16.368347 systemd-logind[1694]: New session 24 of user core.
Jan 30 14:15:16.376422 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 14:15:16.748981 sshd[6471]: pam_unix(sshd:session): session closed for user core
Jan 30 14:15:16.752426 systemd[1]: sshd@21-10.200.20.19:22-10.200.16.10:42132.service: Deactivated successfully.
Jan 30 14:15:16.754780 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 14:15:16.755730 systemd-logind[1694]: Session 24 logged out. Waiting for processes to exit.
Jan 30 14:15:16.757014 systemd-logind[1694]: Removed session 24.
Jan 30 14:15:21.831535 systemd[1]: Started sshd@22-10.200.20.19:22-10.200.16.10:42134.service - OpenSSH per-connection server daemon (10.200.16.10:42134).
Jan 30 14:15:22.259735 sshd[6484]: Accepted publickey for core from 10.200.16.10 port 42134 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:15:22.261298 sshd[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:15:22.265278 systemd-logind[1694]: New session 25 of user core.
Jan 30 14:15:22.274417 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 14:15:22.637523 sshd[6484]: pam_unix(sshd:session): session closed for user core
Jan 30 14:15:22.640671 systemd[1]: sshd@22-10.200.20.19:22-10.200.16.10:42134.service: Deactivated successfully.
Jan 30 14:15:22.643282 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 14:15:22.644960 systemd-logind[1694]: Session 25 logged out. Waiting for processes to exit.
Jan 30 14:15:22.646358 systemd-logind[1694]: Removed session 25.
Jan 30 14:15:27.719522 systemd[1]: Started sshd@23-10.200.20.19:22-10.200.16.10:57942.service - OpenSSH per-connection server daemon (10.200.16.10:57942).
Jan 30 14:15:28.146022 sshd[6517]: Accepted publickey for core from 10.200.16.10 port 57942 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:15:28.147161 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:15:28.151581 systemd-logind[1694]: New session 26 of user core.
Jan 30 14:15:28.155466 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 14:15:28.531472 sshd[6517]: pam_unix(sshd:session): session closed for user core
Jan 30 14:15:28.534926 systemd[1]: sshd@23-10.200.20.19:22-10.200.16.10:57942.service: Deactivated successfully.
Jan 30 14:15:28.536906 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 14:15:28.539208 systemd-logind[1694]: Session 26 logged out. Waiting for processes to exit.
Jan 30 14:15:28.540812 systemd-logind[1694]: Removed session 26.
Jan 30 14:15:33.614410 systemd[1]: Started sshd@24-10.200.20.19:22-10.200.16.10:57950.service - OpenSSH per-connection server daemon (10.200.16.10:57950).
Jan 30 14:15:34.073396 sshd[6530]: Accepted publickey for core from 10.200.16.10 port 57950 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:15:34.074955 sshd[6530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:15:34.079720 systemd-logind[1694]: New session 27 of user core.
Jan 30 14:15:34.086425 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 14:15:34.479882 sshd[6530]: pam_unix(sshd:session): session closed for user core
Jan 30 14:15:34.483454 systemd-logind[1694]: Session 27 logged out. Waiting for processes to exit.
Jan 30 14:15:34.484192 systemd[1]: sshd@24-10.200.20.19:22-10.200.16.10:57950.service: Deactivated successfully.
Jan 30 14:15:34.486862 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 14:15:34.489645 systemd-logind[1694]: Removed session 27.
Jan 30 14:15:39.566556 systemd[1]: Started sshd@25-10.200.20.19:22-10.200.16.10:60384.service - OpenSSH per-connection server daemon (10.200.16.10:60384).
Jan 30 14:15:40.013483 sshd[6544]: Accepted publickey for core from 10.200.16.10 port 60384 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:15:40.015049 sshd[6544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:15:40.020302 systemd-logind[1694]: New session 28 of user core.
Jan 30 14:15:40.030714 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 14:15:40.419194 sshd[6544]: pam_unix(sshd:session): session closed for user core
Jan 30 14:15:40.424103 systemd-logind[1694]: Session 28 logged out. Waiting for processes to exit.
Jan 30 14:15:40.425179 systemd[1]: sshd@25-10.200.20.19:22-10.200.16.10:60384.service: Deactivated successfully.
Jan 30 14:15:40.429006 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 14:15:40.431547 systemd-logind[1694]: Removed session 28.