Jan 30 14:10:16.331318 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 14:10:16.331340 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 14:10:16.331348 kernel: KASLR enabled
Jan 30 14:10:16.331354 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 30 14:10:16.331361 kernel: printk: bootconsole [pl11] enabled
Jan 30 14:10:16.331367 kernel: efi: EFI v2.7 by EDK II
Jan 30 14:10:16.331374 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 30 14:10:16.331380 kernel: random: crng init done
Jan 30 14:10:16.331386 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:10:16.331392 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 30 14:10:16.331398 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:16.331404 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:16.331412 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 30 14:10:16.331419 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:16.331426 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:16.331433 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:16.331439 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:16.331447 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:16.331454 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:16.331460 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 30 14:10:16.331467 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:10:16.331473 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 30 14:10:16.331479 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 30 14:10:16.331486 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 30 14:10:16.331492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 30 14:10:16.331499 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 30 14:10:16.331505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 30 14:10:16.331512 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 30 14:10:16.331519 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 30 14:10:16.331526 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 30 14:10:16.331532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 30 14:10:16.331539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 30 14:10:16.331545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 30 14:10:16.331552 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 30 14:10:16.331558 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 30 14:10:16.331564 kernel: Zone ranges:
Jan 30 14:10:16.331571 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 30 14:10:16.331696 kernel: DMA32 empty
Jan 30 14:10:16.331703 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 14:10:16.331710 kernel: Movable zone start for each node
Jan 30 14:10:16.331722 kernel: Early memory node ranges
Jan 30 14:10:16.331729 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 30 14:10:16.331735 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 30 14:10:16.331742 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 30 14:10:16.331749 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 30 14:10:16.331757 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 30 14:10:16.331764 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 30 14:10:16.331771 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 14:10:16.331778 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 30 14:10:16.331784 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 30 14:10:16.331791 kernel: psci: probing for conduit method from ACPI.
Jan 30 14:10:16.331798 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 14:10:16.331805 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 14:10:16.331811 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 30 14:10:16.331818 kernel: psci: SMC Calling Convention v1.4
Jan 30 14:10:16.331825 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 30 14:10:16.331831 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 30 14:10:16.331840 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 14:10:16.331847 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 14:10:16.331854 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 14:10:16.331860 kernel: Detected PIPT I-cache on CPU0
Jan 30 14:10:16.331867 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 14:10:16.331874 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 14:10:16.331881 kernel: CPU features: detected: Spectre-BHB
Jan 30 14:10:16.331888 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 14:10:16.331894 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 14:10:16.331901 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 14:10:16.331908 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 30 14:10:16.331916 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 14:10:16.331923 kernel: alternatives: applying boot alternatives
Jan 30 14:10:16.331931 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:10:16.331939 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:10:16.331946 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:10:16.331953 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:10:16.331960 kernel: Fallback order for Node 0: 0
Jan 30 14:10:16.331966 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 30 14:10:16.331973 kernel: Policy zone: Normal
Jan 30 14:10:16.331980 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:10:16.331986 kernel: software IO TLB: area num 2.
Jan 30 14:10:16.331995 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 30 14:10:16.332002 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved)
Jan 30 14:10:16.332009 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:10:16.332016 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:10:16.332023 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:10:16.332030 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:10:16.332037 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:10:16.332044 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:10:16.332050 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:10:16.332057 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:10:16.332064 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 14:10:16.332072 kernel: GICv3: 960 SPIs implemented
Jan 30 14:10:16.332079 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 14:10:16.332086 kernel: Root IRQ handler: gic_handle_irq
Jan 30 14:10:16.332093 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 14:10:16.332099 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 30 14:10:16.332106 kernel: ITS: No ITS available, not enabling LPIs
Jan 30 14:10:16.332113 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:10:16.332120 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 14:10:16.332127 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 14:10:16.332134 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 14:10:16.332141 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 14:10:16.332149 kernel: Console: colour dummy device 80x25
Jan 30 14:10:16.332156 kernel: printk: console [tty1] enabled
Jan 30 14:10:16.332163 kernel: ACPI: Core revision 20230628
Jan 30 14:10:16.332170 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 14:10:16.332178 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:10:16.332184 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:10:16.332191 kernel: landlock: Up and running.
Jan 30 14:10:16.332198 kernel: SELinux: Initializing.
Jan 30 14:10:16.332205 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:10:16.332212 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:10:16.332221 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:10:16.332228 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:10:16.332243 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 30 14:10:16.332259 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 30 14:10:16.332267 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 30 14:10:16.332274 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:10:16.332281 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:10:16.332295 kernel: Remapping and enabling EFI services.
Jan 30 14:10:16.332303 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:10:16.332310 kernel: Detected PIPT I-cache on CPU1
Jan 30 14:10:16.332317 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 30 14:10:16.332326 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 14:10:16.332334 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 14:10:16.332341 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:10:16.332348 kernel: SMP: Total of 2 processors activated.
Jan 30 14:10:16.332356 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 14:10:16.332365 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 30 14:10:16.332372 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 14:10:16.332379 kernel: CPU features: detected: CRC32 instructions
Jan 30 14:10:16.332387 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 14:10:16.332394 kernel: CPU features: detected: LSE atomic instructions
Jan 30 14:10:16.332401 kernel: CPU features: detected: Privileged Access Never
Jan 30 14:10:16.332409 kernel: CPU: All CPU(s) started at EL1
Jan 30 14:10:16.332416 kernel: alternatives: applying system-wide alternatives
Jan 30 14:10:16.332423 kernel: devtmpfs: initialized
Jan 30 14:10:16.332432 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:10:16.332439 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:10:16.332447 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:10:16.332454 kernel: SMBIOS 3.1.0 present.
Jan 30 14:10:16.332461 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 30 14:10:16.332468 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:10:16.332476 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 14:10:16.332483 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 14:10:16.332491 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 14:10:16.332499 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:10:16.332507 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 30 14:10:16.332514 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:10:16.332521 kernel: cpuidle: using governor menu
Jan 30 14:10:16.332529 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 14:10:16.332536 kernel: ASID allocator initialised with 32768 entries
Jan 30 14:10:16.332544 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:10:16.332551 kernel: Serial: AMBA PL011 UART driver
Jan 30 14:10:16.332559 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 14:10:16.332568 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 14:10:16.332581 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 14:10:16.332598 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:10:16.332605 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:10:16.332613 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 14:10:16.332620 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 14:10:16.332627 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:10:16.332635 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:10:16.332642 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 14:10:16.332651 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 14:10:16.332659 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:10:16.332666 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:10:16.332673 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:10:16.332680 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:10:16.332688 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:10:16.332695 kernel: ACPI: Interpreter enabled
Jan 30 14:10:16.332702 kernel: ACPI: Using GIC for interrupt routing
Jan 30 14:10:16.332710 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 14:10:16.332719 kernel: printk: console [ttyAMA0] enabled
Jan 30 14:10:16.332726 kernel: printk: bootconsole [pl11] disabled
Jan 30 14:10:16.332733 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 30 14:10:16.332741 kernel: iommu: Default domain type: Translated
Jan 30 14:10:16.332748 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 14:10:16.332755 kernel: efivars: Registered efivars operations
Jan 30 14:10:16.332763 kernel: vgaarb: loaded
Jan 30 14:10:16.332770 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 14:10:16.332777 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:10:16.332786 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:10:16.332793 kernel: pnp: PnP ACPI init
Jan 30 14:10:16.332801 kernel: pnp: PnP ACPI: found 0 devices
Jan 30 14:10:16.332808 kernel: NET: Registered PF_INET protocol family
Jan 30 14:10:16.332815 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:10:16.332823 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 14:10:16.332830 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:10:16.332838 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:10:16.332845 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 14:10:16.332854 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 14:10:16.332861 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:10:16.332869 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:10:16.332876 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:10:16.332883 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:10:16.332891 kernel: kvm [1]: HYP mode not available
Jan 30 14:10:16.332898 kernel: Initialise system trusted keyrings
Jan 30 14:10:16.332905 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 14:10:16.332912 kernel: Key type asymmetric registered
Jan 30 14:10:16.332921 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:10:16.332928 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 14:10:16.332936 kernel: io scheduler mq-deadline registered
Jan 30 14:10:16.332943 kernel: io scheduler kyber registered
Jan 30 14:10:16.332950 kernel: io scheduler bfq registered
Jan 30 14:10:16.332958 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:10:16.332965 kernel: thunder_xcv, ver 1.0
Jan 30 14:10:16.332972 kernel: thunder_bgx, ver 1.0
Jan 30 14:10:16.332980 kernel: nicpf, ver 1.0
Jan 30 14:10:16.332987 kernel: nicvf, ver 1.0
Jan 30 14:10:16.333151 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 14:10:16.333225 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:10:15 UTC (1738246215)
Jan 30 14:10:16.333235 kernel: efifb: probing for efifb
Jan 30 14:10:16.333243 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 30 14:10:16.333250 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 30 14:10:16.333257 kernel: efifb: scrolling: redraw
Jan 30 14:10:16.333265 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 14:10:16.333275 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 14:10:16.333282 kernel: fb0: EFI VGA frame buffer device
Jan 30 14:10:16.333290 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 30 14:10:16.333297 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 14:10:16.333305 kernel: No ACPI PMU IRQ for CPU0
Jan 30 14:10:16.333312 kernel: No ACPI PMU IRQ for CPU1
Jan 30 14:10:16.333319 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 30 14:10:16.333326 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 14:10:16.333334 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 14:10:16.333343 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:10:16.333350 kernel: Segment Routing with IPv6
Jan 30 14:10:16.333357 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:10:16.333365 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:10:16.333372 kernel: Key type dns_resolver registered
Jan 30 14:10:16.333379 kernel: registered taskstats version 1
Jan 30 14:10:16.333387 kernel: Loading compiled-in X.509 certificates
Jan 30 14:10:16.333394 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 14:10:16.333401 kernel: Key type .fscrypt registered
Jan 30 14:10:16.333410 kernel: Key type fscrypt-provisioning registered
Jan 30 14:10:16.333418 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:10:16.333425 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:10:16.333433 kernel: ima: No architecture policies found
Jan 30 14:10:16.333440 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 14:10:16.333447 kernel: clk: Disabling unused clocks
Jan 30 14:10:16.333455 kernel: Freeing unused kernel memory: 39360K
Jan 30 14:10:16.333462 kernel: Run /init as init process
Jan 30 14:10:16.333469 kernel: with arguments:
Jan 30 14:10:16.333478 kernel: /init
Jan 30 14:10:16.333485 kernel: with environment:
Jan 30 14:10:16.333492 kernel: HOME=/
Jan 30 14:10:16.333499 kernel: TERM=linux
Jan 30 14:10:16.333506 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:10:16.333515 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:10:16.333525 systemd[1]: Detected virtualization microsoft.
Jan 30 14:10:16.333533 systemd[1]: Detected architecture arm64.
Jan 30 14:10:16.333542 systemd[1]: Running in initrd.
Jan 30 14:10:16.333549 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:10:16.333557 systemd[1]: Hostname set to <localhost>.
Jan 30 14:10:16.333565 systemd[1]: Initializing machine ID from random generator.
Jan 30 14:10:16.333573 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:10:16.333635 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:10:16.333644 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:10:16.333652 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:10:16.333663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:10:16.333671 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:10:16.333679 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:10:16.333688 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:10:16.333697 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:10:16.333705 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:10:16.333714 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:10:16.333722 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:10:16.333730 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:10:16.333738 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:10:16.333745 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:10:16.333753 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:10:16.333761 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:10:16.333769 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:10:16.333777 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:10:16.333787 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:10:16.333795 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:10:16.333803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:10:16.333810 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:10:16.333818 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:10:16.333827 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:10:16.333834 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:10:16.333842 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:10:16.333850 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:10:16.333859 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:10:16.333885 systemd-journald[217]: Collecting audit messages is disabled.
Jan 30 14:10:16.333905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:16.333914 systemd-journald[217]: Journal started
Jan 30 14:10:16.333934 systemd-journald[217]: Runtime Journal (/run/log/journal/c79b69a079da40ebbd1f2678e2840922) is 8.0M, max 78.5M, 70.5M free.
Jan 30 14:10:16.331238 systemd-modules-load[218]: Inserted module 'overlay'
Jan 30 14:10:16.365609 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 14:10:16.365657 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:10:16.365677 kernel: Bridge firewalling registered
Jan 30 14:10:16.368627 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 30 14:10:16.379214 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:10:16.385170 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:10:16.396948 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 14:10:16.407387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:10:16.416819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:16.440929 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:16.448919 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:10:16.471218 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:10:16.480774 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:10:16.507217 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:16.514363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:10:16.526979 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:10:16.538757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:10:16.564887 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 14:10:16.578361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:10:16.598679 dracut-cmdline[250]: dracut-dracut-053
Jan 30 14:10:16.603065 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:10:16.632003 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:10:16.623293 systemd-resolved[252]: Positive Trust Anchors:
Jan 30 14:10:16.623303 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:10:16.623335 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:10:16.624497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:10:16.626826 systemd-resolved[252]: Defaulting to hostname 'linux'.
Jan 30 14:10:16.633251 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:10:16.670045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:10:16.787603 kernel: SCSI subsystem initialized
Jan 30 14:10:16.795613 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:10:16.805606 kernel: iscsi: registered transport (tcp)
Jan 30 14:10:16.823699 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:10:16.823721 kernel: QLogic iSCSI HBA Driver
Jan 30 14:10:16.857512 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:10:16.871779 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:10:16.904524 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:10:16.904635 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:10:16.910913 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:10:16.960606 kernel: raid6: neonx8 gen() 15770 MB/s
Jan 30 14:10:16.980588 kernel: raid6: neonx4 gen() 15657 MB/s
Jan 30 14:10:17.000592 kernel: raid6: neonx2 gen() 13243 MB/s
Jan 30 14:10:17.021593 kernel: raid6: neonx1 gen() 10486 MB/s
Jan 30 14:10:17.041587 kernel: raid6: int64x8 gen() 6960 MB/s
Jan 30 14:10:17.061587 kernel: raid6: int64x4 gen() 7334 MB/s
Jan 30 14:10:17.082593 kernel: raid6: int64x2 gen() 6117 MB/s
Jan 30 14:10:17.107260 kernel: raid6: int64x1 gen() 5062 MB/s
Jan 30 14:10:17.107281 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s
Jan 30 14:10:17.130591 kernel: raid6: .... xor() 11931 MB/s, rmw enabled
Jan 30 14:10:17.130618 kernel: raid6: using neon recovery algorithm
Jan 30 14:10:17.139590 kernel: xor: measuring software checksum speed
Jan 30 14:10:17.146058 kernel: 8regs : 18620 MB/sec
Jan 30 14:10:17.146080 kernel: 32regs : 19631 MB/sec
Jan 30 14:10:17.149861 kernel: arm64_neon : 26927 MB/sec
Jan 30 14:10:17.153909 kernel: xor: using function: arm64_neon (26927 MB/sec)
Jan 30 14:10:17.204637 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:10:17.215120 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:10:17.231717 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:10:17.253765 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Jan 30 14:10:17.259985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:10:17.276845 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:10:17.292791 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Jan 30 14:10:17.319196 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:10:17.333789 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:10:17.374134 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:10:17.394880 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:10:17.419353 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:10:17.430191 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:10:17.447522 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:10:17.465306 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:10:17.487860 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:10:17.514024 kernel: hv_vmbus: Vmbus version:5.3
Jan 30 14:10:17.521473 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:10:17.556613 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 30 14:10:17.556636 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 30 14:10:17.556647 kernel: hv_vmbus: registering driver hv_storvsc
Jan 30 14:10:17.556656 kernel: hv_vmbus: registering driver hv_netvsc
Jan 30 14:10:17.554190 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:10:17.585733 kernel: scsi host0: storvsc_host_t
Jan 30 14:10:17.586124 kernel: scsi host1: storvsc_host_t
Jan 30 14:10:17.586362 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 30 14:10:17.554314 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:17.641505 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 30 14:10:17.648747 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 30 14:10:17.648768 kernel: hv_vmbus: registering driver hid_hyperv
Jan 30 14:10:17.648778 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 30 14:10:17.648787 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 30 14:10:17.648797 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 30 14:10:17.648890 kernel: PTP clock support registered
Jan 30 14:10:17.582183 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:17.671494 kernel: hv_utils: Registering HyperV Utility Driver
Jan 30 14:10:17.671516 kernel: hv_vmbus: registering driver hv_utils
Jan 30 14:10:17.616970 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:17.617143 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:18.190541 kernel: hv_utils: Shutdown IC version 3.2
Jan 30 14:10:18.190565 kernel: hv_utils: Heartbeat IC version 3.0
Jan 30 14:10:18.190577 kernel: hv_utils: TimeSync IC version 4.0
Jan 30 14:10:18.190586 kernel: hv_netvsc 000d3ac3-b18a-000d-3ac3-b18a000d3ac3 eth0: VF slot 1 added
Jan 30 14:10:17.632212 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:18.214321 kernel: hv_vmbus: registering driver hv_pci
Jan 30 14:10:18.214344 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Jan 30 14:10:18.240269 kernel: hv_pci 0a7060bb-c597-4b20-ac92-51f00a7b302a: PCI VMBus probing: Using version 0x10004
Jan 30 14:10:18.354595 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 14:10:18.354612 kernel: hv_pci 0a7060bb-c597-4b20-ac92-51f00a7b302a: PCI host bridge to bus c597:00
Jan 30 14:10:18.354725 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Jan 30 14:10:18.354952 kernel: pci_bus c597:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 30 14:10:18.355053 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 30 14:10:18.355146 kernel: pci_bus c597:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 30 14:10:18.355222 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Jan 30 14:10:18.355303 kernel: sd 1:0:0:0: [sda] Write Protect is off
Jan 30 14:10:18.355383 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 30 14:10:18.355466 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 30 14:10:18.355547 kernel: pci c597:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 30 14:10:18.355641 kernel: pci c597:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 30 14:10:18.355722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:10:18.355732 kernel: pci c597:00:02.0: enabling Extended Tags
Jan 30 14:10:18.355830 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Jan 30 14:10:18.355917 kernel: pci c597:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c597:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 30 14:10:18.356004 kernel: pci_bus c597:00: busn_res: [bus 00-ff] end is updated to 00
Jan 30 14:10:18.356086 kernel: pci c597:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 30 14:10:17.670261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:18.181891 systemd-resolved[252]: Clock change detected. Flushing caches.
Jan 30 14:10:18.184038 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:18.184141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:18.219954 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:18.240628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:18.261000 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:18.332815 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:18.417920 kernel: mlx5_core c597:00:02.0: enabling device (0000 -> 0002)
Jan 30 14:10:18.636888 kernel: mlx5_core c597:00:02.0: firmware version: 16.30.1284
Jan 30 14:10:18.637028 kernel: hv_netvsc 000d3ac3-b18a-000d-3ac3-b18a000d3ac3 eth0: VF registering: eth1
Jan 30 14:10:18.637118 kernel: mlx5_core c597:00:02.0 eth1: joined to eth0
Jan 30 14:10:18.637214 kernel: mlx5_core c597:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 30 14:10:18.644782 kernel: mlx5_core c597:00:02.0 enP50583s1: renamed from eth1
Jan 30 14:10:18.807583 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 30 14:10:18.881788 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (488)
Jan 30 14:10:18.895857 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 14:10:18.914122 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 30 14:10:18.943781 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (503)
Jan 30 14:10:18.957169 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 30 14:10:18.964146 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 30 14:10:18.992990 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 14:10:19.018789 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:10:19.027812 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:10:19.035863 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:10:20.036853 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:10:20.037707 disk-uuid[601]: The operation has completed successfully.
Jan 30 14:10:20.099676 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 14:10:20.099791 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 14:10:20.128965 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 14:10:20.142619 sh[714]: Success
Jan 30 14:10:20.172005 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 14:10:20.362202 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 14:10:20.368580 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 14:10:20.382883 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 14:10:20.412858 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 30 14:10:20.412940 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:10:20.419627 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 14:10:20.424604 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 14:10:20.428824 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 14:10:20.674470 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 14:10:20.679847 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 14:10:20.701046 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 14:10:20.709946 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 14:10:20.746670 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:10:20.746721 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:10:20.752255 kernel: BTRFS info (device sda6): using free space tree
Jan 30 14:10:20.771826 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 14:10:20.781191 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 14:10:20.795557 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:10:20.805422 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 14:10:20.822018 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 14:10:20.866157 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:10:20.886895 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:10:20.914332 systemd-networkd[898]: lo: Link UP
Jan 30 14:10:20.914345 systemd-networkd[898]: lo: Gained carrier
Jan 30 14:10:20.915924 systemd-networkd[898]: Enumeration completed
Jan 30 14:10:20.918301 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:10:20.918591 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:20.918595 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:10:20.924503 systemd[1]: Reached target network.target - Network.
Jan 30 14:10:21.014795 kernel: mlx5_core c597:00:02.0 enP50583s1: Link up
Jan 30 14:10:21.055842 kernel: hv_netvsc 000d3ac3-b18a-000d-3ac3-b18a000d3ac3 eth0: Data path switched to VF: enP50583s1
Jan 30 14:10:21.055483 systemd-networkd[898]: enP50583s1: Link UP
Jan 30 14:10:21.055568 systemd-networkd[898]: eth0: Link UP
Jan 30 14:10:21.055718 systemd-networkd[898]: eth0: Gained carrier
Jan 30 14:10:21.055727 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:21.080022 systemd-networkd[898]: enP50583s1: Gained carrier
Jan 30 14:10:21.093799 systemd-networkd[898]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 30 14:10:21.739202 ignition[849]: Ignition 2.19.0
Jan 30 14:10:21.739213 ignition[849]: Stage: fetch-offline
Jan 30 14:10:21.741981 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:10:21.739271 ignition[849]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:21.754060 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 14:10:21.739280 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:10:21.739375 ignition[849]: parsed url from cmdline: ""
Jan 30 14:10:21.739378 ignition[849]: no config URL provided
Jan 30 14:10:21.739382 ignition[849]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:10:21.739392 ignition[849]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:10:21.739397 ignition[849]: failed to fetch config: resource requires networking
Jan 30 14:10:21.740383 ignition[849]: Ignition finished successfully
Jan 30 14:10:21.790790 ignition[906]: Ignition 2.19.0
Jan 30 14:10:21.790797 ignition[906]: Stage: fetch
Jan 30 14:10:21.791047 ignition[906]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:21.791061 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:10:21.791177 ignition[906]: parsed url from cmdline: ""
Jan 30 14:10:21.791180 ignition[906]: no config URL provided
Jan 30 14:10:21.791188 ignition[906]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:10:21.791195 ignition[906]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:10:21.791219 ignition[906]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 30 14:10:21.876251 ignition[906]: GET result: OK
Jan 30 14:10:21.876343 ignition[906]: config has been read from IMDS userdata
Jan 30 14:10:21.876406 ignition[906]: parsing config with SHA512: 5518183f8b8f3396688896f26874c4ed7a3432f68127c167446f677493ea63caac6827c501c7780d61fbd01ced501617d7b6078ff63989c172a525071fc21c9a
Jan 30 14:10:21.880828 unknown[906]: fetched base config from "system"
Jan 30 14:10:21.881440 ignition[906]: fetch: fetch complete
Jan 30 14:10:21.880836 unknown[906]: fetched base config from "system"
Jan 30 14:10:21.881445 ignition[906]: fetch: fetch passed
Jan 30 14:10:21.880956 unknown[906]: fetched user config from "azure"
Jan 30 14:10:21.881501 ignition[906]: Ignition finished successfully
Jan 30 14:10:21.882798 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 14:10:21.901001 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 14:10:21.918407 ignition[912]: Ignition 2.19.0
Jan 30 14:10:21.923099 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 14:10:21.918413 ignition[912]: Stage: kargs
Jan 30 14:10:21.918630 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:21.946930 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 14:10:21.918639 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:10:21.919710 ignition[912]: kargs: kargs passed
Jan 30 14:10:21.919773 ignition[912]: Ignition finished successfully
Jan 30 14:10:21.970570 ignition[919]: Ignition 2.19.0
Jan 30 14:10:21.975368 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 14:10:21.970577 ignition[919]: Stage: disks
Jan 30 14:10:21.983327 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 14:10:21.970786 ignition[919]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:21.992057 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:10:21.970796 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:10:22.004071 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:10:21.971740 ignition[919]: disks: disks passed
Jan 30 14:10:22.012391 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:10:21.971795 ignition[919]: Ignition finished successfully
Jan 30 14:10:22.023662 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:10:22.053007 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 14:10:22.124727 systemd-fsck[927]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 30 14:10:22.129195 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 14:10:22.150992 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 14:10:22.206792 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 30 14:10:22.206660 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 14:10:22.211420 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:10:22.252846 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:10:22.262733 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 14:10:22.276972 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 14:10:22.289556 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 14:10:22.334664 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (938)
Jan 30 14:10:22.334689 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:10:22.334700 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:10:22.334709 kernel: BTRFS info (device sda6): using free space tree
Jan 30 14:10:22.289601 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:10:22.331613 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 14:10:22.342972 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 14:10:22.367780 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 14:10:22.369855 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:10:22.547942 systemd-networkd[898]: enP50583s1: Gained IPv6LL
Jan 30 14:10:22.707646 coreos-metadata[940]: Jan 30 14:10:22.707 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 14:10:22.715514 coreos-metadata[940]: Jan 30 14:10:22.715 INFO Fetch successful
Jan 30 14:10:22.720636 coreos-metadata[940]: Jan 30 14:10:22.715 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 30 14:10:22.731782 coreos-metadata[940]: Jan 30 14:10:22.731 INFO Fetch successful
Jan 30 14:10:22.745812 coreos-metadata[940]: Jan 30 14:10:22.745 INFO wrote hostname ci-4081.3.0-a-eeb23789ea to /sysroot/etc/hostname
Jan 30 14:10:22.754798 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 14:10:22.803874 systemd-networkd[898]: eth0: Gained IPv6LL
Jan 30 14:10:23.022288 initrd-setup-root[967]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 14:10:23.077382 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory
Jan 30 14:10:23.084345 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 14:10:23.090620 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 14:10:24.015253 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 14:10:24.032087 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 14:10:24.045057 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 14:10:24.064616 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:10:24.058394 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 14:10:24.089567 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 14:10:24.101250 ignition[1056]: INFO : Ignition 2.19.0
Jan 30 14:10:24.101250 ignition[1056]: INFO : Stage: mount
Jan 30 14:10:24.101250 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:24.101250 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:10:24.101250 ignition[1056]: INFO : mount: mount passed
Jan 30 14:10:24.101250 ignition[1056]: INFO : Ignition finished successfully
Jan 30 14:10:24.097090 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 14:10:24.118841 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 14:10:24.134004 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:10:24.179471 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1068)
Jan 30 14:10:24.179516 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:10:24.186593 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:10:24.195521 kernel: BTRFS info (device sda6): using free space tree
Jan 30 14:10:24.202776 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 14:10:24.204428 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:10:24.234782 ignition[1085]: INFO : Ignition 2.19.0
Jan 30 14:10:24.234782 ignition[1085]: INFO : Stage: files
Jan 30 14:10:24.234782 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:24.234782 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:10:24.254247 ignition[1085]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 14:10:24.269040 ignition[1085]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 14:10:24.269040 ignition[1085]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 14:10:24.335129 ignition[1085]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 14:10:24.342735 ignition[1085]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 14:10:24.342735 ignition[1085]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 14:10:24.335582 unknown[1085]: wrote ssh authorized keys file for user: core
Jan 30 14:10:24.364790 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 14:10:24.375018 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 14:10:24.375018 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 14:10:24.375018 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 14:10:24.576727 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 14:10:24.763004 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 14:10:24.763004 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 30 14:10:25.124208 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 14:10:25.338713 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:10:25.338713 ignition[1085]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:10:25.374072 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 14:10:25.506444 ignition[1085]: INFO : files: files passed
Jan 30 14:10:25.506444 ignition[1085]: INFO : Ignition finished successfully
Jan 30 14:10:25.434114 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 14:10:25.450945 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 14:10:25.485872 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 14:10:25.553956 initrd-setup-root-after-ignition[1112]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:25.553956 initrd-setup-root-after-ignition[1112]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:25.485966 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 14:10:25.586029 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:25.524995 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:10:25.533572 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:10:25.555055 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 14:10:25.611512 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:10:25.611643 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:10:25.624368 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:10:25.635296 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:10:25.648310 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:10:25.670013 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:10:25.690832 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:10:25.710097 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:10:25.729168 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:10:25.729286 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:10:25.743397 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:10:25.756196 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:10:25.769348 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:10:25.780962 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:10:25.781037 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:10:25.797416 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:10:25.803715 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:10:25.815156 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:10:25.827814 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:10:25.840085 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:10:25.852980 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:10:25.865267 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:10:25.878134 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:10:25.889475 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:10:25.901869 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:10:25.913673 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:10:25.913750 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:10:25.929747 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 30 14:10:25.941955 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:10:25.955844 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:10:25.961975 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:10:25.969228 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:10:25.969300 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 14:10:25.987312 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:10:25.987365 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:10:25.995215 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:10:25.995267 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:10:26.008544 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 14:10:26.078930 ignition[1138]: INFO : Ignition 2.19.0 Jan 30 14:10:26.078930 ignition[1138]: INFO : Stage: umount Jan 30 14:10:26.078930 ignition[1138]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:26.078930 ignition[1138]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:26.078930 ignition[1138]: INFO : umount: umount passed Jan 30 14:10:26.078930 ignition[1138]: INFO : Ignition finished successfully Jan 30 14:10:26.008584 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:10:26.040890 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:10:26.046308 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:10:26.046374 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:10:26.081892 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:10:26.096021 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:10:26.096095 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:10:26.108374 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:10:26.108432 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:10:26.123208 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:10:26.123674 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:10:26.124095 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:10:26.136222 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:10:26.136283 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:10:26.145891 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:10:26.145952 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:10:26.154151 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 14:10:26.154197 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 14:10:26.169966 systemd[1]: Stopped target network.target - Network. Jan 30 14:10:26.181257 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:10:26.181324 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:10:26.194513 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:10:26.207099 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 30 14:10:26.217906 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:10:26.225072 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:10:26.244516 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:10:26.256072 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:10:26.256120 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:10:26.261580 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:10:26.261624 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:10:26.271973 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:10:26.272024 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 14:10:26.283082 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:10:26.283127 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:10:26.295450 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:10:26.306942 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:10:26.317804 systemd-networkd[898]: eth0: DHCPv6 lease lost Jan 30 14:10:26.540388 kernel: hv_netvsc 000d3ac3-b18a-000d-3ac3-b18a000d3ac3 eth0: Data path switched from VF: enP50583s1 Jan 30 14:10:26.324706 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:10:26.324902 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:10:26.335793 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:10:26.335897 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:10:26.350198 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:10:26.350256 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:10:26.377981 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:10:26.388660 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:10:26.388739 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:10:26.403174 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:10:26.403235 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:10:26.419852 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:10:26.419907 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:10:26.431407 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 14:10:26.431454 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:10:26.443841 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:10:26.487659 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:10:26.487866 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:10:26.505472 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:10:26.505525 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:10:26.517123 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:10:26.517162 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:10:26.536087 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 30 14:10:26.536154 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:10:26.552468 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:10:26.552541 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:10:26.570296 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:10:26.570368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:26.616989 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:10:26.632253 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:10:26.632332 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:10:26.646689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:10:26.646748 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:26.660273 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:10:26.660383 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:10:26.674889 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 14:10:26.674995 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:10:26.704155 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:10:26.704299 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:10:26.876787 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 30 14:10:26.714466 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:10:26.728024 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:10:26.728091 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:10:26.756698 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:10:26.820872 systemd[1]: Switching root. 
Jan 30 14:10:26.906000 systemd-journald[217]: Journal stopped Jan 30 14:10:16.331966 kernel: Built 1 zonelists, mobility grouping on.
Total pages: 1032156 Jan 30 14:10:16.331973 kernel: Policy zone: Normal Jan 30 14:10:16.331980 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 14:10:16.331986 kernel: software IO TLB: area num 2. Jan 30 14:10:16.331995 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 30 14:10:16.332002 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Jan 30 14:10:16.332009 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 14:10:16.332016 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 14:10:16.332023 kernel: rcu: RCU event tracing is enabled. Jan 30 14:10:16.332030 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 14:10:16.332037 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 14:10:16.332044 kernel: Tracing variant of Tasks RCU enabled. Jan 30 14:10:16.332050 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 14:10:16.332057 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 14:10:16.332064 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 30 14:10:16.332072 kernel: GICv3: 960 SPIs implemented Jan 30 14:10:16.332079 kernel: GICv3: 0 Extended SPIs implemented Jan 30 14:10:16.332086 kernel: Root IRQ handler: gic_handle_irq Jan 30 14:10:16.332093 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 30 14:10:16.332099 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 30 14:10:16.332106 kernel: ITS: No ITS available, not enabling LPIs Jan 30 14:10:16.332113 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 14:10:16.332120 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 14:10:16.332127 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 30 14:10:16.332134 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 30 14:10:16.332141 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 30 14:10:16.332149 kernel: Console: colour dummy device 80x25 Jan 30 14:10:16.332156 kernel: printk: console [tty1] enabled Jan 30 14:10:16.332163 kernel: ACPI: Core revision 20230628 Jan 30 14:10:16.332170 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 30 14:10:16.332178 kernel: pid_max: default: 32768 minimum: 301 Jan 30 14:10:16.332184 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 14:10:16.332191 kernel: landlock: Up and running. Jan 30 14:10:16.332198 kernel: SELinux: Initializing. Jan 30 14:10:16.332205 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 14:10:16.332212 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 14:10:16.332221 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 14:10:16.332228 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 14:10:16.332243 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jan 30 14:10:16.332259 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Jan 30 14:10:16.332267 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 30 14:10:16.332274 kernel: rcu: Hierarchical SRCU implementation. 
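
The "Memory:" line above is internally consistent: the available and reserved figures sum to the total the kernel reports. A quick check:

    # Figures from the "Memory:" line, in KiB.
    available_k, reserved_k, total_k = 3982756, 211404, 4194160
    assert available_k + reserved_k == total_k        # 3982756 + 211404 = 4194160
    print(f"{total_k / 1024:.0f} MiB total, {reserved_k / 1024:.0f} MiB reserved")
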
Jan 30 14:10:16.332281 kernel: rcu: Max phase no-delay instances is 400. Jan 30 14:10:16.332295 kernel: Remapping and enabling EFI services. Jan 30 14:10:16.332303 kernel: smp: Bringing up secondary CPUs ... Jan 30 14:10:16.332310 kernel: Detected PIPT I-cache on CPU1 Jan 30 14:10:16.332317 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 30 14:10:16.332326 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 14:10:16.332334 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 30 14:10:16.332341 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 14:10:16.332348 kernel: SMP: Total of 2 processors activated. Jan 30 14:10:16.332356 kernel: CPU features: detected: 32-bit EL0 Support Jan 30 14:10:16.332365 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 30 14:10:16.332372 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 30 14:10:16.332379 kernel: CPU features: detected: CRC32 instructions Jan 30 14:10:16.332387 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 30 14:10:16.332394 kernel: CPU features: detected: LSE atomic instructions Jan 30 14:10:16.332401 kernel: CPU features: detected: Privileged Access Never Jan 30 14:10:16.332409 kernel: CPU: All CPU(s) started at EL1 Jan 30 14:10:16.332416 kernel: alternatives: applying system-wide alternatives Jan 30 14:10:16.332423 kernel: devtmpfs: initialized Jan 30 14:10:16.332432 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 14:10:16.332439 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 14:10:16.332447 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 14:10:16.332454 kernel: SMBIOS 3.1.0 present. Jan 30 14:10:16.332461 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 30 14:10:16.332468 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 14:10:16.332476 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 30 14:10:16.332483 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 30 14:10:16.332491 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 30 14:10:16.332499 kernel: audit: initializing netlink subsys (disabled) Jan 30 14:10:16.332507 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 30 14:10:16.332514 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 14:10:16.332521 kernel: cpuidle: using governor menu Jan 30 14:10:16.332529 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 30 14:10:16.332536 kernel: ASID allocator initialised with 32768 entries Jan 30 14:10:16.332544 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 14:10:16.332551 kernel: Serial: AMBA PL011 UART driver Jan 30 14:10:16.332559 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 30 14:10:16.332568 kernel: Modules: 0 pages in range for non-PLT usage Jan 30 14:10:16.332581 kernel: Modules: 509040 pages in range for PLT usage Jan 30 14:10:16.332598 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 14:10:16.332605 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 14:10:16.332613 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 30 14:10:16.332620 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 30 14:10:16.332627 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 14:10:16.332635 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 14:10:16.332642 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 30 14:10:16.332651 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 30 14:10:16.332659 kernel: ACPI: Added _OSI(Module Device) Jan 30 14:10:16.332666 kernel: ACPI: Added _OSI(Processor Device) Jan 30 14:10:16.332673 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 14:10:16.332680 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 14:10:16.332688 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 14:10:16.332695 kernel: ACPI: Interpreter enabled Jan 30 14:10:16.332702 kernel: ACPI: Using GIC for interrupt routing Jan 30 14:10:16.332710 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 30 14:10:16.332719 kernel: printk: console [ttyAMA0] enabled Jan 30 14:10:16.332726 kernel: printk: bootconsole [pl11] disabled Jan 30 14:10:16.332733 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 30 14:10:16.332741 kernel: iommu: Default domain type: Translated Jan 30 14:10:16.332748 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 30 14:10:16.332755 kernel: efivars: Registered efivars operations Jan 30 14:10:16.332763 kernel: vgaarb: loaded Jan 30 14:10:16.332770 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 30 14:10:16.332777 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 14:10:16.332786 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 14:10:16.332793 kernel: pnp: PnP ACPI init Jan 30 14:10:16.332801 kernel: pnp: PnP ACPI: found 0 devices Jan 30 14:10:16.332808 kernel: NET: Registered PF_INET protocol family Jan 30 14:10:16.332815 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 14:10:16.332823 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 14:10:16.332830 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 14:10:16.332838 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 14:10:16.332845 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 14:10:16.332854 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 14:10:16.332861 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 14:10:16.332869 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 14:10:16.332876 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 14:10:16.332883 kernel: PCI: CLS 0 bytes, default 64 Jan 30 14:10:16.332891 kernel: kvm [1]: HYP mode not available Jan 30 14:10:16.332898 kernel: Initialise system trusted keyrings Jan 30 14:10:16.332905 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 14:10:16.332912 kernel: Key type asymmetric registered Jan 30 14:10:16.332921 kernel: Asymmetric key parser 'x509' registered Jan 30 14:10:16.332928 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 30 14:10:16.332936 kernel: io scheduler mq-deadline registered Jan 30 14:10:16.332943 kernel: io scheduler kyber registered Jan 30 14:10:16.332950 kernel: io scheduler bfq registered Jan 30 14:10:16.332958 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 14:10:16.332965 kernel: thunder_xcv, ver 1.0 Jan 30 14:10:16.332972 kernel: thunder_bgx, ver 1.0 Jan 30 14:10:16.332980 kernel: nicpf, ver 1.0 Jan 30 14:10:16.332987 kernel: nicvf, ver 1.0 Jan 30 14:10:16.333151 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 14:10:16.333225 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:10:15 UTC (1738246215) Jan 30 14:10:16.333235 kernel: efifb: probing for efifb Jan 30 14:10:16.333243 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 30 14:10:16.333250 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 30 14:10:16.333257 kernel: efifb: scrolling: redraw Jan 30 14:10:16.333265 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 30 14:10:16.333275 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 14:10:16.333282 kernel: fb0: EFI VGA frame buffer device Jan 30 14:10:16.333290 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 30 14:10:16.333297 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 14:10:16.333305 kernel: No ACPI PMU IRQ for CPU0 Jan 30 14:10:16.333312 kernel: No ACPI PMU IRQ for CPU1 Jan 30 14:10:16.333319 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jan 30 14:10:16.333326 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 14:10:16.333334 kernel: watchdog: Hard watchdog permanently disabled Jan 30 14:10:16.333343 kernel: NET: Registered PF_INET6 protocol family Jan 30 14:10:16.333350 kernel: Segment Routing with IPv6 Jan 30 14:10:16.333357 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 14:10:16.333365 kernel: NET: Registered PF_PACKET protocol family Jan 30 14:10:16.333372 kernel: Key type dns_resolver registered Jan 30 14:10:16.333379 kernel: registered taskstats version 1 Jan 30 14:10:16.333387 kernel: Loading compiled-in X.509 certificates Jan 30 14:10:16.333394 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 30 14:10:16.333401 kernel: Key type .fscrypt registered Jan 30 14:10:16.333410 kernel: Key type fscrypt-provisioning registered Jan 30 14:10:16.333418 kernel: ima: No TPM chip found, activating TPM-bypass! 
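
The rtc-efi line above pairs an ISO-8601 timestamp with a Unix epoch value, and the two agree:

    from datetime import datetime, timezone

    ts = datetime(2025, 1, 30, 14, 10, 15, tzinfo=timezone.utc)
    assert int(ts.timestamp()) == 1738246215  # epoch value from the rtc-efi line
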
Jan 30 14:10:16.333425 kernel: ima: Allocated hash algorithm: sha1 Jan 30 14:10:16.333433 kernel: ima: No architecture policies found Jan 30 14:10:16.333440 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 14:10:16.333447 kernel: clk: Disabling unused clocks Jan 30 14:10:16.333455 kernel: Freeing unused kernel memory: 39360K Jan 30 14:10:16.333462 kernel: Run /init as init process Jan 30 14:10:16.333469 kernel: with arguments: Jan 30 14:10:16.333478 kernel: /init Jan 30 14:10:16.333485 kernel: with environment: Jan 30 14:10:16.333492 kernel: HOME=/ Jan 30 14:10:16.333499 kernel: TERM=linux Jan 30 14:10:16.333506 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 14:10:16.333515 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:10:16.333525 systemd[1]: Detected virtualization microsoft. Jan 30 14:10:16.333533 systemd[1]: Detected architecture arm64. Jan 30 14:10:16.333542 systemd[1]: Running in initrd. Jan 30 14:10:16.333549 systemd[1]: No hostname configured, using default hostname. Jan 30 14:10:16.333557 systemd[1]: Hostname set to . Jan 30 14:10:16.333565 systemd[1]: Initializing machine ID from random generator. Jan 30 14:10:16.333573 systemd[1]: Queued start job for default target initrd.target. Jan 30 14:10:16.333635 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:10:16.333644 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:10:16.333652 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 14:10:16.333663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:10:16.333671 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 14:10:16.333679 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 14:10:16.333688 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 14:10:16.333697 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 14:10:16.333705 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:10:16.333714 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:10:16.333722 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:10:16.333730 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:10:16.333738 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:10:16.333745 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:10:16.333753 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:10:16.333761 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:10:16.333769 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 14:10:16.333777 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 30 14:10:16.333787 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:10:16.333795 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:10:16.333803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:10:16.333810 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:10:16.333818 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 14:10:16.333827 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:10:16.333834 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 14:10:16.333842 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 14:10:16.333850 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:10:16.333859 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:10:16.333885 systemd-journald[217]: Collecting audit messages is disabled. Jan 30 14:10:16.333905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:16.333914 systemd-journald[217]: Journal started Jan 30 14:10:16.333934 systemd-journald[217]: Runtime Journal (/run/log/journal/c79b69a079da40ebbd1f2678e2840922) is 8.0M, max 78.5M, 70.5M free. Jan 30 14:10:16.331238 systemd-modules-load[218]: Inserted module 'overlay' Jan 30 14:10:16.365609 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:10:16.365657 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:10:16.365677 kernel: Bridge firewalling registered Jan 30 14:10:16.368627 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 30 14:10:16.379214 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 14:10:16.385170 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:10:16.396948 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:10:16.407387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:10:16.416819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:16.440929 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:10:16.448919 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:10:16.471218 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:10:16.480774 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:10:16.507217 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:16.514363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:10:16.526979 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:10:16.538757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:10:16.564887 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 14:10:16.578361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 30 14:10:16.598679 dracut-cmdline[250]: dracut-dracut-053 Jan 30 14:10:16.603065 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:10:16.632003 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 14:10:16.623293 systemd-resolved[252]: Positive Trust Anchors: Jan 30 14:10:16.623303 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:10:16.623335 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:10:16.624497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:10:16.626826 systemd-resolved[252]: Defaulting to hostname 'linux'. Jan 30 14:10:16.633251 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:10:16.670045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:10:16.787603 kernel: SCSI subsystem initialized Jan 30 14:10:16.795613 kernel: Loading iSCSI transport class v2.0-870. Jan 30 14:10:16.805606 kernel: iscsi: registered transport (tcp) Jan 30 14:10:16.823699 kernel: iscsi: registered transport (qla4xxx) Jan 30 14:10:16.823721 kernel: QLogic iSCSI HBA Driver Jan 30 14:10:16.857512 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 14:10:16.871779 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 14:10:16.904524 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 14:10:16.904635 kernel: device-mapper: uevent: version 1.0.3 Jan 30 14:10:16.910913 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 14:10:16.960606 kernel: raid6: neonx8 gen() 15770 MB/s Jan 30 14:10:16.980588 kernel: raid6: neonx4 gen() 15657 MB/s Jan 30 14:10:17.000592 kernel: raid6: neonx2 gen() 13243 MB/s Jan 30 14:10:17.021593 kernel: raid6: neonx1 gen() 10486 MB/s Jan 30 14:10:17.041587 kernel: raid6: int64x8 gen() 6960 MB/s Jan 30 14:10:17.061587 kernel: raid6: int64x4 gen() 7334 MB/s Jan 30 14:10:17.082593 kernel: raid6: int64x2 gen() 6117 MB/s Jan 30 14:10:17.107260 kernel: raid6: int64x1 gen() 5062 MB/s Jan 30 14:10:17.107281 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s Jan 30 14:10:17.130591 kernel: raid6: .... 
xor() 11931 MB/s, rmw enabled Jan 30 14:10:17.130618 kernel: raid6: using neon recovery algorithm Jan 30 14:10:17.139590 kernel: xor: measuring software checksum speed Jan 30 14:10:17.146058 kernel: 8regs : 18620 MB/sec Jan 30 14:10:17.146080 kernel: 32regs : 19631 MB/sec Jan 30 14:10:17.149861 kernel: arm64_neon : 26927 MB/sec Jan 30 14:10:17.153909 kernel: xor: using function: arm64_neon (26927 MB/sec) Jan 30 14:10:17.204637 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 14:10:17.215120 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:10:17.231717 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:10:17.253765 systemd-udevd[437]: Using default interface naming scheme 'v255'. Jan 30 14:10:17.259985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:10:17.276845 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 14:10:17.292791 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Jan 30 14:10:17.319196 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:10:17.333789 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:10:17.374134 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:10:17.394880 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 14:10:17.419353 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 14:10:17.430191 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:10:17.447522 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:10:17.465306 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:10:17.487860 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 14:10:17.514024 kernel: hv_vmbus: Vmbus version:5.3 Jan 30 14:10:17.521473 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:10:17.556613 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 30 14:10:17.556636 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 30 14:10:17.556647 kernel: hv_vmbus: registering driver hv_storvsc Jan 30 14:10:17.556656 kernel: hv_vmbus: registering driver hv_netvsc Jan 30 14:10:17.554190 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:10:17.585733 kernel: scsi host0: storvsc_host_t Jan 30 14:10:17.586124 kernel: scsi host1: storvsc_host_t Jan 30 14:10:17.586362 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 30 14:10:17.554314 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:17.641505 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 30 14:10:17.648747 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 14:10:17.648768 kernel: hv_vmbus: registering driver hid_hyperv Jan 30 14:10:17.648778 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 14:10:17.648787 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 30 14:10:17.648797 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 30 14:10:17.648890 kernel: PTP clock support registered Jan 30 14:10:17.582183 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:10:17.671494 kernel: hv_utils: Registering HyperV Utility Driver Jan 30 14:10:17.671516 kernel: hv_vmbus: registering driver hv_utils Jan 30 14:10:17.616970 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:10:17.617143 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:18.190541 kernel: hv_utils: Shutdown IC version 3.2 Jan 30 14:10:18.190565 kernel: hv_utils: Heartbeat IC version 3.0 Jan 30 14:10:18.190577 kernel: hv_utils: TimeSync IC version 4.0 Jan 30 14:10:18.190586 kernel: hv_netvsc 000d3ac3-b18a-000d-3ac3-b18a000d3ac3 eth0: VF slot 1 added Jan 30 14:10:17.632212 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:18.214321 kernel: hv_vmbus: registering driver hv_pci Jan 30 14:10:18.214344 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Jan 30 14:10:18.240269 kernel: hv_pci 0a7060bb-c597-4b20-ac92-51f00a7b302a: PCI VMBus probing: Using version 0x10004 Jan 30 14:10:18.354595 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 14:10:18.354612 kernel: hv_pci 0a7060bb-c597-4b20-ac92-51f00a7b302a: PCI host bridge to bus c597:00 Jan 30 14:10:18.354725 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Jan 30 14:10:18.354952 kernel: pci_bus c597:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 30 14:10:18.355053 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 30 14:10:18.355146 kernel: pci_bus c597:00: No busn resource found for root bus, will use [bus 00-ff] Jan 30 14:10:18.355222 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 30 14:10:18.355303 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 30 14:10:18.355383 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 30 14:10:18.355466 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 30 14:10:18.355547 kernel: pci c597:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 30 14:10:18.355641 kernel: pci c597:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 30 14:10:18.355722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:18.355732 kernel: pci c597:00:02.0: enabling Extended Tags Jan 30 14:10:18.355830 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 30 14:10:18.355917 kernel: pci c597:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c597:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 30 14:10:18.356004 kernel: pci_bus c597:00: busn_res: [bus 00-ff] end is updated to 00 Jan 30 14:10:18.356086 kernel: pci c597:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 30 14:10:17.670261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:18.181891 systemd-resolved[252]: Clock change detected. Flushing caches. Jan 30 14:10:18.184038 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:10:18.184141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 14:10:18.219954 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:18.240628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:18.261000 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:10:18.332815 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:18.417920 kernel: mlx5_core c597:00:02.0: enabling device (0000 -> 0002) Jan 30 14:10:18.636888 kernel: mlx5_core c597:00:02.0: firmware version: 16.30.1284 Jan 30 14:10:18.637028 kernel: hv_netvsc 000d3ac3-b18a-000d-3ac3-b18a000d3ac3 eth0: VF registering: eth1 Jan 30 14:10:18.637118 kernel: mlx5_core c597:00:02.0 eth1: joined to eth0 Jan 30 14:10:18.637214 kernel: mlx5_core c597:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 30 14:10:18.644782 kernel: mlx5_core c597:00:02.0 enP50583s1: renamed from eth1 Jan 30 14:10:18.807583 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 30 14:10:18.881788 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (488) Jan 30 14:10:18.895857 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 14:10:18.914122 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 30 14:10:18.943781 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (503) Jan 30 14:10:18.957169 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 30 14:10:18.964146 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 30 14:10:18.992990 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:10:19.018789 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:19.027812 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:19.035863 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:20.036853 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:20.037707 disk-uuid[601]: The operation has completed successfully. Jan 30 14:10:20.099676 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:10:20.099791 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:10:20.128965 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:10:20.142619 sh[714]: Success Jan 30 14:10:20.172005 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 14:10:20.362202 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:10:20.368580 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 14:10:20.382883 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
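
verity-setup above validated /dev/mapper/usr against the root hash passed on the kernel command line (the verity.usrhash= parameter visible in the dracut cmdline earlier). A minimal sketch of reading that expected hash back at runtime, assuming a Linux /proc:

    # Assumes a Linux /proc; the parameter name is copied from the logged cmdline.
    with open("/proc/cmdline") as f:
        params = f.read().split()
    usrhash = next((p.split("=", 1)[1] for p in params
                    if p.startswith("verity.usrhash=")), None)
    print(usrhash)  # expected root hash dm-verity enforces for /usr
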
Jan 30 14:10:20.412858 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 14:10:20.412940 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:20.419627 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:10:20.424604 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:10:20.428824 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:10:20.674470 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:10:20.679847 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 14:10:20.701046 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:10:20.709946 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:10:20.746670 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:20.746721 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:20.752255 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:10:20.771826 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:10:20.781191 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 14:10:20.795557 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:20.805422 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:10:20.822018 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 14:10:20.866157 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:10:20.886895 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:10:20.914332 systemd-networkd[898]: lo: Link UP Jan 30 14:10:20.914345 systemd-networkd[898]: lo: Gained carrier Jan 30 14:10:20.915924 systemd-networkd[898]: Enumeration completed Jan 30 14:10:20.918301 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:10:20.918591 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:20.918595 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:10:20.924503 systemd[1]: Reached target network.target - Network. Jan 30 14:10:21.014795 kernel: mlx5_core c597:00:02.0 enP50583s1: Link up Jan 30 14:10:21.055842 kernel: hv_netvsc 000d3ac3-b18a-000d-3ac3-b18a000d3ac3 eth0: Data path switched to VF: enP50583s1 Jan 30 14:10:21.055483 systemd-networkd[898]: enP50583s1: Link UP Jan 30 14:10:21.055568 systemd-networkd[898]: eth0: Link UP Jan 30 14:10:21.055718 systemd-networkd[898]: eth0: Gained carrier Jan 30 14:10:21.055727 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
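
The VMBus instance ID printed by hv_netvsc (000d3ac3-b18a-000d-3ac3-b18a000d3ac3) appears to embed the NIC's MAC address twice; 00:0d:3a is a Microsoft OUI, which fits the Azure synthetic NIC. A small parse based only on the pattern visible in this log:

    # Instance ID copied from the hv_netvsc lines; it repeats one 6-byte value.
    guid = "000d3ac3-b18a-000d-3ac3-b18a000d3ac3"
    digits = guid.replace("-", "")                    # 24 hex digits
    assert digits[:12] == digits[12:]                 # same 6 bytes twice
    mac = ":".join(digits[i:i + 2] for i in range(0, 12, 2))
    print(mac)                                        # 00:0d:3a:c3:b1:8a
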
Jan 30 14:10:21.080022 systemd-networkd[898]: enP50583s1: Gained carrier Jan 30 14:10:21.093799 systemd-networkd[898]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 30 14:10:21.739202 ignition[849]: Ignition 2.19.0 Jan 30 14:10:21.739213 ignition[849]: Stage: fetch-offline Jan 30 14:10:21.741981 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:10:21.739271 ignition[849]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:21.754060 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 14:10:21.739280 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:21.739375 ignition[849]: parsed url from cmdline: "" Jan 30 14:10:21.739378 ignition[849]: no config URL provided Jan 30 14:10:21.739382 ignition[849]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:10:21.739392 ignition[849]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:10:21.739397 ignition[849]: failed to fetch config: resource requires networking Jan 30 14:10:21.740383 ignition[849]: Ignition finished successfully Jan 30 14:10:21.790790 ignition[906]: Ignition 2.19.0 Jan 30 14:10:21.790797 ignition[906]: Stage: fetch Jan 30 14:10:21.791047 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:21.791061 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:21.791177 ignition[906]: parsed url from cmdline: "" Jan 30 14:10:21.791180 ignition[906]: no config URL provided Jan 30 14:10:21.791188 ignition[906]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:10:21.791195 ignition[906]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:10:21.791219 ignition[906]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 30 14:10:21.876251 ignition[906]: GET result: OK Jan 30 14:10:21.876343 ignition[906]: config has been read from IMDS userdata Jan 30 14:10:21.876406 ignition[906]: parsing config with SHA512: 5518183f8b8f3396688896f26874c4ed7a3432f68127c167446f677493ea63caac6827c501c7780d61fbd01ced501617d7b6078ff63989c172a525071fc21c9a Jan 30 14:10:21.880828 unknown[906]: fetched base config from "system" Jan 30 14:10:21.881440 ignition[906]: fetch: fetch complete Jan 30 14:10:21.880836 unknown[906]: fetched base config from "system" Jan 30 14:10:21.881445 ignition[906]: fetch: fetch passed Jan 30 14:10:21.880956 unknown[906]: fetched user config from "azure" Jan 30 14:10:21.881501 ignition[906]: Ignition finished successfully Jan 30 14:10:21.882798 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 14:10:21.901001 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 14:10:21.918407 ignition[912]: Ignition 2.19.0 Jan 30 14:10:21.923099 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:10:21.918413 ignition[912]: Stage: kargs Jan 30 14:10:21.918630 ignition[912]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:21.946930 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:10:21.918639 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:10:21.919710 ignition[912]: kargs: kargs passed Jan 30 14:10:21.919773 ignition[912]: Ignition finished successfully Jan 30 14:10:21.970570 ignition[919]: Ignition 2.19.0 Jan 30 14:10:21.975368 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
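
The fetch stage above pulls the Ignition config from the Azure Instance Metadata Service using the exact URL in the GET line. The same endpoint can be queried by hand from inside the VM; a minimal sketch (IMDS requires the Metadata header and returns userData base64-encoded; whether Ignition hashes the payload before or after decoding is not shown in the log):

    import base64, hashlib, urllib.request

    # Endpoint copied verbatim from the log line above.
    url = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})

    # Only reachable from inside an Azure VM; IMDS rejects requests without the header.
    with urllib.request.urlopen(req, timeout=5) as resp:
        userdata_b64 = resp.read()

    config = base64.b64decode(userdata_b64)
    # Analogous to the "parsing config with SHA512: ..." digest Ignition logged.
    print(hashlib.sha512(config).hexdigest())
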
Jan 30 14:10:21.970577 ignition[919]: Stage: disks
Jan 30 14:10:21.983327 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 14:10:21.970786 ignition[919]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:21.992057 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:10:21.970796 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:10:22.004071 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:10:21.971740 ignition[919]: disks: disks passed
Jan 30 14:10:22.012391 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:10:21.971795 ignition[919]: Ignition finished successfully
Jan 30 14:10:22.023662 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:10:22.053007 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 14:10:22.124727 systemd-fsck[927]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 30 14:10:22.129195 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 14:10:22.150992 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 14:10:22.206792 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 30 14:10:22.206660 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 14:10:22.211420 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:10:22.252846 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:10:22.262733 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 14:10:22.276972 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 14:10:22.289556 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 14:10:22.334664 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (938)
Jan 30 14:10:22.334689 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:10:22.334700 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:10:22.334709 kernel: BTRFS info (device sda6): using free space tree
Jan 30 14:10:22.289601 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:10:22.331613 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 14:10:22.342972 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 14:10:22.367780 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 14:10:22.369855 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:10:22.547942 systemd-networkd[898]: enP50583s1: Gained IPv6LL
Jan 30 14:10:22.707646 coreos-metadata[940]: Jan 30 14:10:22.707 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 14:10:22.715514 coreos-metadata[940]: Jan 30 14:10:22.715 INFO Fetch successful
Jan 30 14:10:22.720636 coreos-metadata[940]: Jan 30 14:10:22.715 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 30 14:10:22.731782 coreos-metadata[940]: Jan 30 14:10:22.731 INFO Fetch successful
Jan 30 14:10:22.745812 coreos-metadata[940]: Jan 30 14:10:22.745 INFO wrote hostname ci-4081.3.0-a-eeb23789ea to /sysroot/etc/hostname
Jan 30 14:10:22.754798 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 14:10:22.803874 systemd-networkd[898]: eth0: Gained IPv6LL
Jan 30 14:10:23.022288 initrd-setup-root[967]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 14:10:23.077382 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory
Jan 30 14:10:23.084345 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 14:10:23.090620 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 14:10:24.015253 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 14:10:24.032087 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 14:10:24.045057 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 14:10:24.064616 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:10:24.058394 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 14:10:24.089567 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 14:10:24.101250 ignition[1056]: INFO : Ignition 2.19.0
Jan 30 14:10:24.101250 ignition[1056]: INFO : Stage: mount
Jan 30 14:10:24.101250 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:24.101250 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:10:24.101250 ignition[1056]: INFO : mount: mount passed
Jan 30 14:10:24.101250 ignition[1056]: INFO : Ignition finished successfully
Jan 30 14:10:24.097090 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 14:10:24.118841 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 14:10:24.134004 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:10:24.179471 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1068)
Jan 30 14:10:24.179516 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:10:24.186593 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:10:24.195521 kernel: BTRFS info (device sda6): using free space tree
Jan 30 14:10:24.202776 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 14:10:24.204428 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
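The coreos-metadata entries above fetch the instance name from IMDS and write it into the new root as the hostname. A minimal sketch of the equivalent two steps, with the URL and target path taken verbatim from the log; the agent's retry logic ("Attempt #1") and error handling are omitted:

    import urllib.request

    # Endpoint copied verbatim from the coreos-metadata fetch line above.
    URL = ("http://169.254.169.254/metadata/instance/compute/name"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        hostname = resp.read().decode().strip()

    # Written into the sysroot that becomes / after switch-root.
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")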
Jan 30 14:10:24.234782 ignition[1085]: INFO : Ignition 2.19.0
Jan 30 14:10:24.234782 ignition[1085]: INFO : Stage: files
Jan 30 14:10:24.234782 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:24.234782 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:10:24.254247 ignition[1085]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 14:10:24.269040 ignition[1085]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 14:10:24.269040 ignition[1085]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 14:10:24.335129 ignition[1085]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 14:10:24.342735 ignition[1085]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 14:10:24.342735 ignition[1085]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 14:10:24.335582 unknown[1085]: wrote ssh authorized keys file for user: core
Jan 30 14:10:24.364790 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 14:10:24.375018 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 14:10:24.375018 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 14:10:24.375018 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 14:10:24.576727 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 14:10:24.763004 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 14:10:24.763004 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:10:24.784509 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 30 14:10:25.124208 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 14:10:25.338713 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:10:25.338713 ignition[1085]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:10:25.359804 ignition[1085]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:10:25.374072 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 14:10:25.506444 ignition[1085]: INFO : files: files passed
Jan 30 14:10:25.506444 ignition[1085]: INFO : Ignition finished successfully
Jan 30 14:10:25.434114 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 14:10:25.450945 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 14:10:25.485872 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 14:10:25.553956 initrd-setup-root-after-ignition[1112]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:10:25.553956 initrd-setup-root-after-ignition[1112]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:10:25.485966 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 14:10:25.586029 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:10:25.524995 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:10:25.533572 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 14:10:25.555055 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 14:10:25.611512 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 14:10:25.611643 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 14:10:25.624368 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 14:10:25.635296 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 14:10:25.648310 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 14:10:25.670013 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 14:10:25.690832 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:10:25.710097 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 14:10:25.729168 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 14:10:25.729286 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 14:10:25.743397 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:10:25.756196 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:10:25.769348 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 14:10:25.780962 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 14:10:25.781037 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:10:25.797416 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 14:10:25.803715 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 14:10:25.815156 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 14:10:25.827814 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:10:25.840085 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 14:10:25.852980 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 14:10:25.865267 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:10:25.878134 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 14:10:25.889475 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 14:10:25.901869 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 14:10:25.913673 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 14:10:25.913750 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:10:25.929747 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:10:25.941955 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:10:25.955844 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 14:10:25.961975 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:10:25.969228 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 14:10:25.969300 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:10:25.987312 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 14:10:25.987365 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:10:25.995215 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 14:10:25.995267 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 14:10:26.008544 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 14:10:26.078930 ignition[1138]: INFO : Ignition 2.19.0
Jan 30 14:10:26.078930 ignition[1138]: INFO : Stage: umount
Jan 30 14:10:26.078930 ignition[1138]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:26.078930 ignition[1138]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:10:26.078930 ignition[1138]: INFO : umount: umount passed
Jan 30 14:10:26.078930 ignition[1138]: INFO : Ignition finished successfully
Jan 30 14:10:26.008584 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 14:10:26.040890 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 14:10:26.046308 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 14:10:26.046374 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:10:26.081892 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 14:10:26.096021 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 14:10:26.096095 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:10:26.108374 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 14:10:26.108432 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:10:26.123208 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 14:10:26.123674 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 14:10:26.124095 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 14:10:26.136222 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 14:10:26.136283 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 14:10:26.145891 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 14:10:26.145952 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 14:10:26.154151 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 14:10:26.154197 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 14:10:26.169966 systemd[1]: Stopped target network.target - Network.
Jan 30 14:10:26.181257 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 14:10:26.181324 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:10:26.194513 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 14:10:26.207099 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 14:10:26.217906 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:10:26.225072 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 14:10:26.244516 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 14:10:26.256072 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 14:10:26.256120 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:10:26.261580 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 14:10:26.261624 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:10:26.271973 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 14:10:26.272024 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 14:10:26.283082 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 14:10:26.283127 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 14:10:26.295450 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 14:10:26.306942 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 14:10:26.317804 systemd-networkd[898]: eth0: DHCPv6 lease lost
Jan 30 14:10:26.540388 kernel: hv_netvsc 000d3ac3-b18a-000d-3ac3-b18a000d3ac3 eth0: Data path switched from VF: enP50583s1
Jan 30 14:10:26.324706 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 14:10:26.324902 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 14:10:26.335793 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 14:10:26.335897 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 14:10:26.350198 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 14:10:26.350256 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:10:26.377981 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 14:10:26.388660 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 14:10:26.388739 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:10:26.403174 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 14:10:26.403235 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:10:26.419852 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 14:10:26.419907 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:10:26.431407 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 14:10:26.431454 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:10:26.443841 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:10:26.487659 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 14:10:26.487866 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:10:26.505472 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 14:10:26.505525 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:10:26.517123 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 14:10:26.517162 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:10:26.536087 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 14:10:26.536154 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:10:26.552468 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 14:10:26.552541 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:10:26.570296 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:10:26.570368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:26.616989 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 14:10:26.632253 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 14:10:26.632332 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:10:26.646689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:26.646748 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:26.660273 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 14:10:26.660383 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 14:10:26.674889 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 14:10:26.674995 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 14:10:26.704155 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 14:10:26.704299 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 14:10:26.876787 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jan 30 14:10:26.714466 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 14:10:26.728024 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 14:10:26.728091 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 14:10:26.756698 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 14:10:26.820872 systemd[1]: Switching root.
Jan 30 14:10:26.906000 systemd-journald[217]: Journal stopped
Jan 30 14:10:30.899201 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 14:10:30.899224 kernel: SELinux: policy capability open_perms=1
Jan 30 14:10:30.899234 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 14:10:30.899242 kernel: SELinux: policy capability always_check_network=0
Jan 30 14:10:30.899251 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 14:10:30.899259 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 14:10:30.899268 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 14:10:30.899276 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 14:10:30.899284 kernel: audit: type=1403 audit(1738246228.255:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 14:10:30.899294 systemd[1]: Successfully loaded SELinux policy in 111.882ms.
Jan 30 14:10:30.899305 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.967ms.
Jan 30 14:10:30.899315 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:10:30.899324 systemd[1]: Detected virtualization microsoft.
Jan 30 14:10:30.899333 systemd[1]: Detected architecture arm64.
Jan 30 14:10:30.899342 systemd[1]: Detected first boot.
Jan 30 14:10:30.899353 systemd[1]: Hostname set to <ci-4081.3.0-a-eeb23789ea>.
Jan 30 14:10:30.899362 systemd[1]: Initializing machine ID from random generator.
Jan 30 14:10:30.899371 zram_generator::config[1196]: No configuration found.
Jan 30 14:10:30.899382 systemd[1]: Populated /etc with preset unit settings.
Jan 30 14:10:30.899391 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 14:10:30.899400 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 30 14:10:30.899410 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 14:10:30.899420 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 14:10:30.899430 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 14:10:30.899440 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 14:10:30.899449 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 14:10:30.899458 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 14:10:30.899468 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 14:10:30.899477 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 14:10:30.899487 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:10:30.899497 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:10:30.899506 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 14:10:30.899515 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 14:10:30.899525 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 14:10:30.899534 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:10:30.899543 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 30 14:10:30.899552 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:10:30.899563 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 14:10:30.899572 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:10:30.899583 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:10:30.899594 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:10:30.899604 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:10:30.899613 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 14:10:30.899623 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 14:10:30.899632 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:10:30.899643 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:10:30.899653 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:10:30.899662 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:10:30.899672 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:10:30.899681 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 14:10:30.899693 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 14:10:30.899703 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 14:10:30.899713 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 14:10:30.899723 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 14:10:30.899732 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 14:10:30.899742 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 14:10:30.899751 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 14:10:30.899773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:10:30.899785 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:10:30.899795 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 14:10:30.899805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:10:30.899815 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:10:30.899825 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:10:30.899834 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 14:10:30.899844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:10:30.899854 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 14:10:30.899866 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 30 14:10:30.899876 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 30 14:10:30.899885 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:10:30.899895 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:10:30.899905 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 14:10:30.899914 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 14:10:30.899924 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:10:30.899933 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 14:10:30.899944 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 14:10:30.899954 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 14:10:30.899977 systemd-journald[1292]: Collecting audit messages is disabled.
Jan 30 14:10:30.899997 systemd-journald[1292]: Journal started
Jan 30 14:10:30.900019 systemd-journald[1292]: Runtime Journal (/run/log/journal/1aec448ab9164e9fa0a488febe9a7293) is 8.0M, max 78.5M, 70.5M free.
Jan 30 14:10:30.923589 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:10:30.924863 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 14:10:30.933476 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 14:10:30.940954 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 14:10:30.943772 kernel: loop: module loaded
Jan 30 14:10:30.950380 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:10:30.957627 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 14:10:30.957854 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 14:10:30.965282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:10:30.965434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:10:30.972194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:10:30.972346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:10:30.980047 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:10:30.980196 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:10:30.987028 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 14:10:30.994394 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 14:10:31.001601 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:10:31.014572 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 14:10:31.025862 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 14:10:31.032171 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 14:10:31.200002 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 14:10:31.214256 kernel: fuse: init (API version 7.39)
Jan 30 14:10:31.218042 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 14:10:31.225537 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:10:31.227940 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 14:10:31.236941 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:10:31.241950 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:10:31.257802 kernel: ACPI: bus type drm_connector registered
Jan 30 14:10:31.261972 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 14:10:31.271223 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 14:10:31.278432 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:10:31.278679 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:10:31.286541 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 14:10:31.286751 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 14:10:31.288125 systemd-journald[1292]: Time spent on flushing to /var/log/journal/1aec448ab9164e9fa0a488febe9a7293 is 12.382ms for 882 entries.
Jan 30 14:10:31.288125 systemd-journald[1292]: System Journal (/var/log/journal/1aec448ab9164e9fa0a488febe9a7293) is 8.0M, max 2.6G, 2.6G free.
Jan 30 14:10:31.955113 systemd-journald[1292]: Received client request to flush runtime journal.
Jan 30 14:10:31.300266 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:10:31.309624 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 14:10:31.321139 udevadm[1346]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 14:10:31.326833 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 14:10:31.334342 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:10:31.341368 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 14:10:31.596314 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 14:10:31.604045 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 14:10:31.622022 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Jan 30 14:10:31.622032 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Jan 30 14:10:31.626449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:10:31.637940 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 14:10:31.961376 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 14:10:32.001265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:10:33.022892 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 14:10:33.034935 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:10:33.053366 systemd-tmpfiles[1374]: ACLs are not supported, ignoring.
Jan 30 14:10:33.053381 systemd-tmpfiles[1374]: ACLs are not supported, ignoring.
Jan 30 14:10:33.057421 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:10:36.281466 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 14:10:36.293917 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:10:36.323923 systemd-udevd[1380]: Using default interface naming scheme 'v255'.
Jan 30 14:10:37.052830 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:10:37.072970 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:10:37.152566 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jan 30 14:10:37.311862 kernel: hv_vmbus: registering driver hv_balloon
Jan 30 14:10:37.322159 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 30 14:10:37.322240 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 30 14:10:37.359498 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 14:10:37.367884 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 14:10:37.380773 kernel: hv_vmbus: registering driver hyperv_fb
Jan 30 14:10:37.393515 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 30 14:10:37.393583 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 30 14:10:37.393893 kernel: Console: switching to colour dummy device 80x25
Jan 30 14:10:37.405974 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 14:10:37.421563 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 14:10:37.754465 systemd-networkd[1394]: lo: Link UP
Jan 30 14:10:37.754472 systemd-networkd[1394]: lo: Gained carrier
Jan 30 14:10:37.756258 systemd-networkd[1394]: Enumeration completed
Jan 30 14:10:37.756394 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:10:37.756585 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:37.756594 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:10:37.770942 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 14:10:37.817783 kernel: mlx5_core c597:00:02.0 enP50583s1: Link up
Jan 30 14:10:37.844779 kernel: hv_netvsc 000d3ac3-b18a-000d-3ac3-b18a000d3ac3 eth0: Data path switched to VF: enP50583s1
Jan 30 14:10:37.845466 systemd-networkd[1394]: enP50583s1: Link UP
Jan 30 14:10:37.845559 systemd-networkd[1394]: eth0: Link UP
Jan 30 14:10:37.845563 systemd-networkd[1394]: eth0: Gained carrier
Jan 30 14:10:37.845577 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:37.854018 systemd-networkd[1394]: enP50583s1: Gained carrier
Jan 30 14:10:37.862087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:37.869348 systemd-networkd[1394]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 30 14:10:37.872315 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:37.872529 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:37.894927 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:37.911115 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:37.911379 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:37.924985 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:38.139807 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1393)
Jan 30 14:10:38.197187 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 14:10:38.488075 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 14:10:38.499986 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 14:10:38.873468 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 14:10:39.202749 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 14:10:39.211503 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:10:39.223908 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 14:10:39.235868 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 14:10:39.260304 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 14:10:39.269425 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:10:39.278244 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 14:10:39.278277 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:10:39.284309 systemd[1]: Reached target machines.target - Containers.
Jan 30 14:10:39.290646 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 14:10:39.308938 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 14:10:39.316302 systemd-networkd[1394]: enP50583s1: Gained IPv6LL
Jan 30 14:10:39.317846 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 14:10:39.324474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:10:39.325534 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 14:10:39.333945 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 14:10:39.343278 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 14:10:39.457603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:39.552118 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 14:10:39.763851 systemd-networkd[1394]: eth0: Gained IPv6LL
Jan 30 14:10:39.766860 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 14:10:40.056358 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 14:10:40.315772 kernel: loop0: detected capacity change from 0 to 114432
Jan 30 14:10:42.012780 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 14:10:42.086925 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 14:10:42.087642 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 14:10:42.100854 kernel: loop1: detected capacity change from 0 to 31320
Jan 30 14:10:44.751782 kernel: loop2: detected capacity change from 0 to 194096
Jan 30 14:10:44.880781 kernel: loop3: detected capacity change from 0 to 114328
Jan 30 14:10:46.538783 kernel: loop4: detected capacity change from 0 to 114432
Jan 30 14:10:46.548789 kernel: loop5: detected capacity change from 0 to 31320
Jan 30 14:10:46.557788 kernel: loop6: detected capacity change from 0 to 194096
Jan 30 14:10:46.568779 kernel: loop7: detected capacity change from 0 to 114328
Jan 30 14:10:46.571992 (sd-merge)[1508]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 30 14:10:46.572419 (sd-merge)[1508]: Merged extensions into '/usr'.
Jan 30 14:10:46.575459 systemd[1]: Reloading requested from client PID 1486 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 14:10:46.575718 systemd[1]: Reloading...
Jan 30 14:10:46.639800 zram_generator::config[1541]: No configuration found.
Jan 30 14:10:46.803589 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:10:46.874447 systemd[1]: Reloading finished in 298 ms.
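The loop4-loop7 capacity changes and the (sd-merge) lines above are systemd-sysext attaching the four extension images and overlaying them onto /usr. A minimal sketch of enumerating the images sysext can see, assuming the commonly documented search directories (/etc/extensions, /run/extensions, /var/lib/extensions); systemd-sysext itself performs considerably more validation before merging:

    import os

    # Directories systemd-sysext is documented to scan for extension images;
    # treated here as an assumption rather than an exhaustive list.
    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    for d in SEARCH_DIRS:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            path = os.path.join(d, name)
            # Images may be symlinks, e.g. /etc/extensions/kubernetes.raw
            # -> /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw,
            # as written by the Ignition files stage earlier in this log.
            print(f"{path} -> {os.path.realpath(path)}")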
Jan 30 14:10:46.890659 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 14:10:46.910976 systemd[1]: Starting ensure-sysext.service...
Jan 30 14:10:46.919998 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:10:46.930928 systemd[1]: Reloading requested from client PID 1596 ('systemctl') (unit ensure-sysext.service)...
Jan 30 14:10:46.930947 systemd[1]: Reloading...
Jan 30 14:10:46.949089 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 14:10:46.949388 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 14:10:46.950177 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 14:10:46.950442 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Jan 30 14:10:46.950490 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Jan 30 14:10:46.956770 systemd-tmpfiles[1597]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 14:10:46.957367 systemd-tmpfiles[1597]: Skipping /boot
Jan 30 14:10:46.965921 systemd-tmpfiles[1597]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 14:10:46.965935 systemd-tmpfiles[1597]: Skipping /boot
Jan 30 14:10:47.005784 zram_generator::config[1623]: No configuration found.
Jan 30 14:10:47.130596 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:10:47.200858 systemd[1]: Reloading finished in 269 ms.
Jan 30 14:10:47.213918 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:10:47.231065 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 14:10:47.243025 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 14:10:47.256050 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 14:10:47.272992 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:10:47.283333 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 14:10:47.299338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:10:47.301214 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:10:47.314678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:10:47.338059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:10:47.354033 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:10:47.355109 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 14:10:47.363714 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:10:47.363988 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:10:47.372064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:10:47.372222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:10:47.381450 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:10:47.383111 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:10:47.399930 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:10:47.400176 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:10:47.402505 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:10:47.410100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:10:47.418990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:10:47.431070 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:10:47.439978 systemd-resolved[1700]: Positive Trust Anchors:
Jan 30 14:10:47.439998 systemd-resolved[1700]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:10:47.446068 augenrules[1721]: No rules
Jan 30 14:10:47.440031 systemd-resolved[1700]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:10:47.442305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:10:47.446710 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 14:10:47.455901 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 14:10:47.467186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:10:47.467411 systemd-resolved[1700]: Using system hostname 'ci-4081.3.0-a-eeb23789ea'.
Jan 30 14:10:47.467521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:10:47.476019 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:10:47.484648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:10:47.485016 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:10:47.493748 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:10:47.494038 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:10:47.511808 systemd[1]: Reached target network.target - Network.
Jan 30 14:10:47.518967 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 14:10:47.526859 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:10:47.535665 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:10:47.541007 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:10:47.550669 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:10:47.560047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:10:47.571041 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:10:47.582038 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:10:47.582254 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 14:10:47.590435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:10:47.590619 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:10:47.599851 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:10:47.600008 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:10:47.609171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:10:47.609805 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:10:47.621231 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:10:47.621878 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:10:47.631077 systemd[1]: Finished ensure-sysext.service.
Jan 30 14:10:47.641831 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:10:47.641894 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:10:47.850652 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 14:10:47.860916 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 14:10:50.079770 ldconfig[1483]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 14:10:50.104518 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 14:10:50.118005 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 14:10:50.134119 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 14:10:50.142902 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:10:50.149002 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 14:10:50.157939 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 14:10:50.166339 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 14:10:50.173299 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 14:10:50.180721 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 14:10:50.188393 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 14:10:50.188433 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:10:50.193653 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:10:50.201827 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 14:10:50.210458 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 14:10:50.219402 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:10:50.228933 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:10:50.236316 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:10:50.241658 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:10:50.248147 systemd[1]: System is tainted: cgroupsv1 Jan 30 14:10:50.248196 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:10:50.248214 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:10:50.254831 systemd[1]: Starting chronyd.service - NTP client/server... Jan 30 14:10:50.265904 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:10:50.282933 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:10:50.290092 (chronyd)[1766]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 30 14:10:50.298978 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:10:50.306407 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:10:50.325056 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:10:50.326186 chronyd[1776]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 30 14:10:50.336338 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:10:50.336383 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 30 14:10:50.337701 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 30 14:10:50.346844 chronyd[1776]: Timezone right/UTC failed leap second check, ignoring Jan 30 14:10:50.347035 chronyd[1776]: Loaded seccomp filter (level 2) Jan 30 14:10:50.347715 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 30 14:10:50.350054 jq[1773]: false Jan 30 14:10:50.350365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:50.356335 KVP[1777]: KVP starting; pid is:1777 Jan 30 14:10:50.363967 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:10:50.372961 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:10:50.385089 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 14:10:50.395957 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:10:50.396277 KVP[1777]: KVP LIC Version: 3.1 Jan 30 14:10:50.396772 kernel: hv_utils: KVP IC version 4.0 Jan 30 14:10:50.411067 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 30 14:10:50.425320 extend-filesystems[1774]: Found loop4 Jan 30 14:10:50.425320 extend-filesystems[1774]: Found loop5 Jan 30 14:10:50.437199 extend-filesystems[1774]: Found loop6 Jan 30 14:10:50.437199 extend-filesystems[1774]: Found loop7 Jan 30 14:10:50.437199 extend-filesystems[1774]: Found sda Jan 30 14:10:50.437199 extend-filesystems[1774]: Found sda1 Jan 30 14:10:50.437199 extend-filesystems[1774]: Found sda2 Jan 30 14:10:50.437199 extend-filesystems[1774]: Found sda3 Jan 30 14:10:50.437199 extend-filesystems[1774]: Found usr Jan 30 14:10:50.437199 extend-filesystems[1774]: Found sda4 Jan 30 14:10:50.437199 extend-filesystems[1774]: Found sda6 Jan 30 14:10:50.437199 extend-filesystems[1774]: Found sda7 Jan 30 14:10:50.437199 extend-filesystems[1774]: Found sda9 Jan 30 14:10:50.437199 extend-filesystems[1774]: Checking size of /dev/sda9 Jan 30 14:10:50.504352 dbus-daemon[1770]: [system] SELinux support is enabled Jan 30 14:10:50.446267 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:10:50.612622 extend-filesystems[1774]: Old size kept for /dev/sda9 Jan 30 14:10:50.612622 extend-filesystems[1774]: Found sr0 Jan 30 14:10:50.462817 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 14:10:50.464099 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:10:50.488851 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:10:50.646285 jq[1807]: true Jan 30 14:10:50.510841 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:10:50.544234 systemd[1]: Started chronyd.service - NTP client/server. Jan 30 14:10:50.568101 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:10:50.568354 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 14:10:50.568624 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:10:50.568838 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:10:50.584136 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:10:50.584376 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 14:10:50.591018 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 14:10:50.608158 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:10:50.608390 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:10:50.650375 (ntainerd)[1828]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:10:50.653228 jq[1826]: true Jan 30 14:10:50.665615 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:10:50.665657 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:10:50.685718 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:10:50.685746 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 30 14:10:50.714869 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1829) Jan 30 14:10:50.837354 coreos-metadata[1768]: Jan 30 14:10:50.837 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 14:10:50.841613 coreos-metadata[1768]: Jan 30 14:10:50.841 INFO Fetch successful Jan 30 14:10:50.841613 coreos-metadata[1768]: Jan 30 14:10:50.841 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 30 14:10:50.846884 coreos-metadata[1768]: Jan 30 14:10:50.846 INFO Fetch successful Jan 30 14:10:50.846884 coreos-metadata[1768]: Jan 30 14:10:50.846 INFO Fetching http://168.63.129.16/machine/678d03ee-7bbf-4774-9d96-f1df33c37e79/f3c124f0%2Dfb9c%2D43ee%2Da5d4%2D5d5be3f5e159.%5Fci%2D4081.3.0%2Da%2Deeb23789ea?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 30 14:10:50.849362 coreos-metadata[1768]: Jan 30 14:10:50.849 INFO Fetch successful Jan 30 14:10:50.850120 coreos-metadata[1768]: Jan 30 14:10:50.850 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 30 14:10:50.865377 coreos-metadata[1768]: Jan 30 14:10:50.865 INFO Fetch successful Jan 30 14:10:50.881581 update_engine[1803]: I20250130 14:10:50.881496 1803 main.cc:92] Flatcar Update Engine starting Jan 30 14:10:50.885478 systemd-logind[1798]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 14:10:50.886955 systemd-logind[1798]: New seat seat0. Jan 30 14:10:50.888321 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:10:50.898980 update_engine[1803]: I20250130 14:10:50.898911 1803 update_check_scheduler.cc:74] Next update check in 10m40s Jan 30 14:10:50.915955 systemd[1]: Started update-engine.service - Update Engine. Jan 30 14:10:50.923334 tar[1822]: linux-arm64/helm Jan 30 14:10:50.926415 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 14:10:50.938148 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 14:10:50.940007 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:10:50.946671 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:10:51.924508 sshd_keygen[1805]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:10:51.943574 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:10:51.956108 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:10:51.974967 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 30 14:10:51.988412 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:10:51.988663 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:10:52.000613 locksmithd[1896]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:10:52.009056 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:10:52.017282 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 30 14:10:52.376301 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:10:52.394111 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:10:52.402147 bash[1868]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:10:52.409407 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Jan 30 14:10:52.418060 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 14:10:52.429393 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:10:52.439256 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 14:10:52.576587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:52.586786 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:10:52.606658 tar[1822]: linux-arm64/LICENSE Jan 30 14:10:52.606658 tar[1822]: linux-arm64/README.md Jan 30 14:10:52.618691 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:10:53.005684 kubelet[1943]: E0130 14:10:53.005575 1943 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:10:53.008098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:10:53.008283 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:10:53.942260 containerd[1828]: time="2025-01-30T14:10:53.942171460Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 14:10:53.965851 containerd[1828]: time="2025-01-30T14:10:53.965783420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:53.968645 containerd[1828]: time="2025-01-30T14:10:53.967516860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:53.968645 containerd[1828]: time="2025-01-30T14:10:53.967555540Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:10:53.968645 containerd[1828]: time="2025-01-30T14:10:53.967573020Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:10:53.968645 containerd[1828]: time="2025-01-30T14:10:53.967740260Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:10:53.968645 containerd[1828]: time="2025-01-30T14:10:53.967776540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:53.968645 containerd[1828]: time="2025-01-30T14:10:53.967842540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:53.968645 containerd[1828]: time="2025-01-30T14:10:53.967855180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:53.968645 containerd[1828]: time="2025-01-30T14:10:53.968051180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:53.968645 containerd[1828]: time="2025-01-30T14:10:53.968065500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:53.968645 containerd[1828]: time="2025-01-30T14:10:53.968077980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:53.968645 containerd[1828]: time="2025-01-30T14:10:53.968087660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:53.968974 containerd[1828]: time="2025-01-30T14:10:53.968151220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:53.968974 containerd[1828]: time="2025-01-30T14:10:53.968342700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:53.968974 containerd[1828]: time="2025-01-30T14:10:53.968460420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:53.968974 containerd[1828]: time="2025-01-30T14:10:53.968474740Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:10:53.968974 containerd[1828]: time="2025-01-30T14:10:53.968541980Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:10:53.968974 containerd[1828]: time="2025-01-30T14:10:53.968577860Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:10:54.353546 containerd[1828]: time="2025-01-30T14:10:54.353447940Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:10:54.353797 containerd[1828]: time="2025-01-30T14:10:54.353699540Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:10:54.353797 containerd[1828]: time="2025-01-30T14:10:54.353775460Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:10:54.353865 containerd[1828]: time="2025-01-30T14:10:54.353806300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:10:54.353865 containerd[1828]: time="2025-01-30T14:10:54.353823620Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:10:54.354009 containerd[1828]: time="2025-01-30T14:10:54.353985620Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:10:54.354959 containerd[1828]: time="2025-01-30T14:10:54.354928660Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:10:54.355095 containerd[1828]: time="2025-01-30T14:10:54.355074020Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 30 14:10:54.355121 containerd[1828]: time="2025-01-30T14:10:54.355094580Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:10:54.355121 containerd[1828]: time="2025-01-30T14:10:54.355109260Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:10:54.355166 containerd[1828]: time="2025-01-30T14:10:54.355123780Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:10:54.355166 containerd[1828]: time="2025-01-30T14:10:54.355137340Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:10:54.355166 containerd[1828]: time="2025-01-30T14:10:54.355149620Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:10:54.355223 containerd[1828]: time="2025-01-30T14:10:54.355165100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:10:54.355223 containerd[1828]: time="2025-01-30T14:10:54.355179620Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:10:54.355223 containerd[1828]: time="2025-01-30T14:10:54.355192140Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:10:54.355223 containerd[1828]: time="2025-01-30T14:10:54.355211260Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:10:54.355293 containerd[1828]: time="2025-01-30T14:10:54.355223380Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:10:54.355293 containerd[1828]: time="2025-01-30T14:10:54.355243500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355293 containerd[1828]: time="2025-01-30T14:10:54.355256940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355293 containerd[1828]: time="2025-01-30T14:10:54.355268220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355293 containerd[1828]: time="2025-01-30T14:10:54.355282660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355382 containerd[1828]: time="2025-01-30T14:10:54.355294500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355382 containerd[1828]: time="2025-01-30T14:10:54.355310220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355382 containerd[1828]: time="2025-01-30T14:10:54.355322340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355382 containerd[1828]: time="2025-01-30T14:10:54.355335980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355382 containerd[1828]: time="2025-01-30T14:10:54.355349220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 14:10:54.355382 containerd[1828]: time="2025-01-30T14:10:54.355367900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355382 containerd[1828]: time="2025-01-30T14:10:54.355381060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355501 containerd[1828]: time="2025-01-30T14:10:54.355392460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355501 containerd[1828]: time="2025-01-30T14:10:54.355404580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355501 containerd[1828]: time="2025-01-30T14:10:54.355419740Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:10:54.355501 containerd[1828]: time="2025-01-30T14:10:54.355439580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355501 containerd[1828]: time="2025-01-30T14:10:54.355451940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355501 containerd[1828]: time="2025-01-30T14:10:54.355462300Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:10:54.355710 containerd[1828]: time="2025-01-30T14:10:54.355512980Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:10:54.355710 containerd[1828]: time="2025-01-30T14:10:54.355531340Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:10:54.355710 containerd[1828]: time="2025-01-30T14:10:54.355542300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:10:54.355710 containerd[1828]: time="2025-01-30T14:10:54.355554940Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:10:54.355710 containerd[1828]: time="2025-01-30T14:10:54.355564660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:10:54.355710 containerd[1828]: time="2025-01-30T14:10:54.355597060Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:10:54.355710 containerd[1828]: time="2025-01-30T14:10:54.355607260Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:10:54.355710 containerd[1828]: time="2025-01-30T14:10:54.355617500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 14:10:54.355993 containerd[1828]: time="2025-01-30T14:10:54.355906380Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:10:54.355993 containerd[1828]: time="2025-01-30T14:10:54.355966300Z" level=info msg="Connect containerd service" Jan 30 14:10:54.356151 containerd[1828]: time="2025-01-30T14:10:54.355995340Z" level=info msg="using legacy CRI server" Jan 30 14:10:54.356151 containerd[1828]: time="2025-01-30T14:10:54.356002340Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:10:54.356151 containerd[1828]: time="2025-01-30T14:10:54.356089500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:10:54.356750 containerd[1828]: time="2025-01-30T14:10:54.356716300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 
14:10:54.358667 containerd[1828]: time="2025-01-30T14:10:54.356891260Z" level=info msg="Start subscribing containerd event" Jan 30 14:10:54.358667 containerd[1828]: time="2025-01-30T14:10:54.356952380Z" level=info msg="Start recovering state" Jan 30 14:10:54.358667 containerd[1828]: time="2025-01-30T14:10:54.357018940Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:10:54.358667 containerd[1828]: time="2025-01-30T14:10:54.357020860Z" level=info msg="Start event monitor" Jan 30 14:10:54.358667 containerd[1828]: time="2025-01-30T14:10:54.357051180Z" level=info msg="Start snapshots syncer" Jan 30 14:10:54.358667 containerd[1828]: time="2025-01-30T14:10:54.357061340Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:10:54.358667 containerd[1828]: time="2025-01-30T14:10:54.357069340Z" level=info msg="Start streaming server" Jan 30 14:10:54.358667 containerd[1828]: time="2025-01-30T14:10:54.357055540Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:10:54.357293 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:10:54.364423 containerd[1828]: time="2025-01-30T14:10:54.364384500Z" level=info msg="containerd successfully booted in 0.424154s" Jan 30 14:10:54.364679 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:10:54.374826 systemd[1]: Startup finished in 12.605s (kernel) + 26.229s (userspace) = 38.834s. Jan 30 14:10:55.469961 login[1928]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 30 14:10:55.471864 login[1930]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:55.482519 systemd-logind[1798]: New session 2 of user core. Jan 30 14:10:55.484207 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:10:55.494010 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:10:55.506315 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:10:55.520421 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:10:55.662414 (systemd)[1972]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:10:55.774907 systemd[1972]: Queued start job for default target default.target. Jan 30 14:10:55.775882 systemd[1972]: Created slice app.slice - User Application Slice. Jan 30 14:10:55.775984 systemd[1972]: Reached target paths.target - Paths. Jan 30 14:10:55.775997 systemd[1972]: Reached target timers.target - Timers. Jan 30 14:10:55.785915 systemd[1972]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:10:55.792283 systemd[1972]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:10:55.792916 systemd[1972]: Reached target sockets.target - Sockets. Jan 30 14:10:55.793026 systemd[1972]: Reached target basic.target - Basic System. Jan 30 14:10:55.793273 systemd[1972]: Reached target default.target - Main User Target. Jan 30 14:10:55.793372 systemd[1972]: Startup finished in 125ms. Jan 30 14:10:55.793519 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:10:55.794699 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:10:56.470393 login[1928]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:56.474797 systemd-logind[1798]: New session 1 of user core. 
Jan 30 14:10:56.480068 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:10:58.991777 waagent[1924]: 2025-01-30T14:10:58.988384Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 30 14:10:58.994927 waagent[1924]: 2025-01-30T14:10:58.994859Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 30 14:10:59.000113 waagent[1924]: 2025-01-30T14:10:59.000061Z INFO Daemon Daemon Python: 3.11.9 Jan 30 14:10:59.005551 waagent[1924]: 2025-01-30T14:10:59.005366Z INFO Daemon Daemon Run daemon Jan 30 14:10:59.010057 waagent[1924]: 2025-01-30T14:10:59.010010Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 30 14:10:59.020311 waagent[1924]: 2025-01-30T14:10:59.020248Z INFO Daemon Daemon Using waagent for provisioning Jan 30 14:10:59.026551 waagent[1924]: 2025-01-30T14:10:59.026503Z INFO Daemon Daemon Activate resource disk Jan 30 14:10:59.041789 waagent[1924]: 2025-01-30T14:10:59.032250Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 30 14:10:59.045113 waagent[1924]: 2025-01-30T14:10:59.045036Z INFO Daemon Daemon Found device: None Jan 30 14:10:59.050475 waagent[1924]: 2025-01-30T14:10:59.050418Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 30 14:10:59.060253 waagent[1924]: 2025-01-30T14:10:59.060194Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 30 14:10:59.077182 waagent[1924]: 2025-01-30T14:10:59.077124Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 14:10:59.083751 waagent[1924]: 2025-01-30T14:10:59.083703Z INFO Daemon Daemon Running default provisioning handler Jan 30 14:10:59.096152 waagent[1924]: 2025-01-30T14:10:59.095477Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 30 14:10:59.112130 waagent[1924]: 2025-01-30T14:10:59.112065Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 30 14:10:59.123711 waagent[1924]: 2025-01-30T14:10:59.123651Z INFO Daemon Daemon cloud-init is enabled: False Jan 30 14:10:59.129480 waagent[1924]: 2025-01-30T14:10:59.129423Z INFO Daemon Daemon Copying ovf-env.xml Jan 30 14:10:59.767955 waagent[1924]: 2025-01-30T14:10:59.767852Z INFO Daemon Daemon Successfully mounted dvd Jan 30 14:10:59.784259 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 30 14:10:59.786229 waagent[1924]: 2025-01-30T14:10:59.786152Z INFO Daemon Daemon Detect protocol endpoint Jan 30 14:10:59.792016 waagent[1924]: 2025-01-30T14:10:59.791953Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 14:10:59.798517 waagent[1924]: 2025-01-30T14:10:59.798465Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 30 14:10:59.806492 waagent[1924]: 2025-01-30T14:10:59.806438Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 30 14:10:59.813051 waagent[1924]: 2025-01-30T14:10:59.812999Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 30 14:10:59.819002 waagent[1924]: 2025-01-30T14:10:59.818951Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 30 14:10:59.923839 waagent[1924]: 2025-01-30T14:10:59.923791Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 30 14:10:59.930608 waagent[1924]: 2025-01-30T14:10:59.930577Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 30 14:10:59.936299 waagent[1924]: 2025-01-30T14:10:59.936255Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 30 14:11:00.314802 waagent[1924]: 2025-01-30T14:11:00.314611Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 30 14:11:00.323090 waagent[1924]: 2025-01-30T14:11:00.322964Z INFO Daemon Daemon Forcing an update of the goal state. Jan 30 14:11:00.334612 waagent[1924]: 2025-01-30T14:11:00.334559Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 14:11:00.355556 waagent[1924]: 2025-01-30T14:11:00.355509Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 30 14:11:00.362518 waagent[1924]: 2025-01-30T14:11:00.362470Z INFO Daemon Jan 30 14:11:00.365711 waagent[1924]: 2025-01-30T14:11:00.365665Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: bb36d856-0227-4f69-ad6c-f9e2e300cbc8 eTag: 16793285354000058171 source: Fabric] Jan 30 14:11:00.378845 waagent[1924]: 2025-01-30T14:11:00.378795Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 30 14:11:00.386860 waagent[1924]: 2025-01-30T14:11:00.386813Z INFO Daemon Jan 30 14:11:00.389986 waagent[1924]: 2025-01-30T14:11:00.389941Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 30 14:11:00.402869 waagent[1924]: 2025-01-30T14:11:00.402833Z INFO Daemon Daemon Downloading artifacts profile blob Jan 30 14:11:00.498060 waagent[1924]: 2025-01-30T14:11:00.497965Z INFO Daemon Downloaded certificate {'thumbprint': 'F0D8B08867CC6E3F468803682A5D0BADEBA25161', 'hasPrivateKey': True} Jan 30 14:11:00.509513 waagent[1924]: 2025-01-30T14:11:00.509459Z INFO Daemon Downloaded certificate {'thumbprint': '4491E969706BCB696B4976360D07EDDB25A0AD2A', 'hasPrivateKey': False} Jan 30 14:11:00.520483 waagent[1924]: 2025-01-30T14:11:00.520430Z INFO Daemon Fetch goal state completed Jan 30 14:11:00.531461 waagent[1924]: 2025-01-30T14:11:00.531411Z INFO Daemon Daemon Starting provisioning Jan 30 14:11:00.537327 waagent[1924]: 2025-01-30T14:11:00.537268Z INFO Daemon Daemon Handle ovf-env.xml. Jan 30 14:11:00.542947 waagent[1924]: 2025-01-30T14:11:00.542900Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-eeb23789ea] Jan 30 14:11:00.825785 waagent[1924]: 2025-01-30T14:11:00.820751Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-eeb23789ea] Jan 30 14:11:00.827911 waagent[1924]: 2025-01-30T14:11:00.827849Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 30 14:11:00.834941 waagent[1924]: 2025-01-30T14:11:00.834887Z INFO Daemon Daemon Primary interface is [eth0] Jan 30 14:11:00.863117 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:11:00.863125 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 14:11:00.863153 systemd-networkd[1394]: eth0: DHCP lease lost Jan 30 14:11:00.864613 waagent[1924]: 2025-01-30T14:11:00.864512Z INFO Daemon Daemon Create user account if not exists Jan 30 14:11:00.870296 waagent[1924]: 2025-01-30T14:11:00.870229Z INFO Daemon Daemon User core already exists, skip useradd Jan 30 14:11:00.875938 waagent[1924]: 2025-01-30T14:11:00.875808Z INFO Daemon Daemon Configure sudoer Jan 30 14:11:00.876028 systemd-networkd[1394]: eth0: DHCPv6 lease lost Jan 30 14:11:00.880491 waagent[1924]: 2025-01-30T14:11:00.880425Z INFO Daemon Daemon Configure sshd Jan 30 14:11:00.885278 waagent[1924]: 2025-01-30T14:11:00.885219Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 30 14:11:00.899344 waagent[1924]: 2025-01-30T14:11:00.899271Z INFO Daemon Daemon Deploy ssh public key. Jan 30 14:11:00.914854 systemd-networkd[1394]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 30 14:11:02.265659 waagent[1924]: 2025-01-30T14:11:02.265607Z INFO Daemon Daemon Provisioning complete Jan 30 14:11:02.285690 waagent[1924]: 2025-01-30T14:11:02.285640Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 30 14:11:02.293281 waagent[1924]: 2025-01-30T14:11:02.293215Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 30 14:11:02.304607 waagent[1924]: 2025-01-30T14:11:02.304540Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 30 14:11:02.437741 waagent[2030]: 2025-01-30T14:11:02.437116Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 30 14:11:02.437741 waagent[2030]: 2025-01-30T14:11:02.437267Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 30 14:11:02.437741 waagent[2030]: 2025-01-30T14:11:02.437320Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 30 14:11:02.571629 waagent[2030]: 2025-01-30T14:11:02.571484Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 30 14:11:02.571989 waagent[2030]: 2025-01-30T14:11:02.571945Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 14:11:02.572132 waagent[2030]: 2025-01-30T14:11:02.572099Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 14:11:02.584182 waagent[2030]: 2025-01-30T14:11:02.584107Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 14:11:02.590547 waagent[2030]: 2025-01-30T14:11:02.590502Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 30 14:11:02.591208 waagent[2030]: 2025-01-30T14:11:02.591169Z INFO ExtHandler Jan 30 14:11:02.591781 waagent[2030]: 2025-01-30T14:11:02.591349Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c0c8f2d0-e1ec-4401-a193-f78133d89439 eTag: 16793285354000058171 source: Fabric] Jan 30 14:11:02.591781 waagent[2030]: 2025-01-30T14:11:02.591653Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 30 14:11:02.592395 waagent[2030]: 2025-01-30T14:11:02.592352Z INFO ExtHandler Jan 30 14:11:02.592534 waagent[2030]: 2025-01-30T14:11:02.592503Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 30 14:11:02.598050 waagent[2030]: 2025-01-30T14:11:02.598004Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 30 14:11:02.683491 waagent[2030]: 2025-01-30T14:11:02.683414Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F0D8B08867CC6E3F468803682A5D0BADEBA25161', 'hasPrivateKey': True} Jan 30 14:11:02.685295 waagent[2030]: 2025-01-30T14:11:02.684040Z INFO ExtHandler Downloaded certificate {'thumbprint': '4491E969706BCB696B4976360D07EDDB25A0AD2A', 'hasPrivateKey': False} Jan 30 14:11:02.685295 waagent[2030]: 2025-01-30T14:11:02.684449Z INFO ExtHandler Fetch goal state completed Jan 30 14:11:02.705145 waagent[2030]: 2025-01-30T14:11:02.705077Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2030 Jan 30 14:11:02.705794 waagent[2030]: 2025-01-30T14:11:02.705396Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 30 14:11:02.707190 waagent[2030]: 2025-01-30T14:11:02.707139Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 30 14:11:02.707582 waagent[2030]: 2025-01-30T14:11:02.707545Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 30 14:11:03.232029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:11:03.233532 waagent[2030]: 2025-01-30T14:11:03.232114Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 30 14:11:03.233532 waagent[2030]: 2025-01-30T14:11:03.232332Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 30 14:11:03.239123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:03.245585 waagent[2030]: 2025-01-30T14:11:03.245529Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 30 14:11:03.253198 systemd[1]: Reloading requested from client PID 2049 ('systemctl') (unit waagent.service)... Jan 30 14:11:03.253223 systemd[1]: Reloading... Jan 30 14:11:03.342833 zram_generator::config[2090]: No configuration found. Jan 30 14:11:03.442282 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:11:03.518902 systemd[1]: Reloading finished in 265 ms. Jan 30 14:11:03.541900 waagent[2030]: 2025-01-30T14:11:03.541517Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 30 14:11:03.547706 systemd[1]: Reloading requested from client PID 2143 ('systemctl') (unit waagent.service)... Jan 30 14:11:03.547719 systemd[1]: Reloading... Jan 30 14:11:03.633861 zram_generator::config[2181]: No configuration found. Jan 30 14:11:03.742922 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:11:03.822447 systemd[1]: Reloading finished in 274 ms. 
Jan 30 14:11:03.841790 waagent[2030]: 2025-01-30T14:11:03.841029Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 30 14:11:03.841790 waagent[2030]: 2025-01-30T14:11:03.841198Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 30 14:11:06.681929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:06.685077 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:06.745592 kubelet[2250]: E0130 14:11:06.745521 2250 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:06.748548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:06.748716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:07.142833 waagent[2030]: 2025-01-30T14:11:07.142715Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 30 14:11:07.143422 waagent[2030]: 2025-01-30T14:11:07.143367Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 30 14:11:07.144253 waagent[2030]: 2025-01-30T14:11:07.144195Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 30 14:11:07.144746 waagent[2030]: 2025-01-30T14:11:07.144571Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 30 14:11:07.145818 waagent[2030]: 2025-01-30T14:11:07.144991Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 14:11:07.145818 waagent[2030]: 2025-01-30T14:11:07.145092Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 14:11:07.145818 waagent[2030]: 2025-01-30T14:11:07.145304Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 30 14:11:07.145818 waagent[2030]: 2025-01-30T14:11:07.145483Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 30 14:11:07.145818 waagent[2030]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 30 14:11:07.145818 waagent[2030]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 30 14:11:07.145818 waagent[2030]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 30 14:11:07.145818 waagent[2030]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 30 14:11:07.145818 waagent[2030]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 14:11:07.145818 waagent[2030]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 14:11:07.146174 waagent[2030]: 2025-01-30T14:11:07.146108Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 30 14:11:07.146324 waagent[2030]: 2025-01-30T14:11:07.146269Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 14:11:07.146421 waagent[2030]: 2025-01-30T14:11:07.146384Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jan 30 14:11:07.146879 waagent[2030]: 2025-01-30T14:11:07.146808Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 30 14:11:07.147133 waagent[2030]: 2025-01-30T14:11:07.147096Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 30 14:11:07.147222 waagent[2030]: 2025-01-30T14:11:07.147031Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 14:11:07.147298 waagent[2030]: 2025-01-30T14:11:07.147240Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 30 14:11:07.147816 waagent[2030]: 2025-01-30T14:11:07.147724Z INFO EnvHandler ExtHandler Configure routes Jan 30 14:11:07.148420 waagent[2030]: 2025-01-30T14:11:07.148364Z INFO EnvHandler ExtHandler Gateway:None Jan 30 14:11:07.149198 waagent[2030]: 2025-01-30T14:11:07.149148Z INFO EnvHandler ExtHandler Routes:None Jan 30 14:11:07.158925 waagent[2030]: 2025-01-30T14:11:07.158865Z INFO ExtHandler ExtHandler Jan 30 14:11:07.159198 waagent[2030]: 2025-01-30T14:11:07.159153Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 77c7da58-c613-4172-ab66-e04ae9371ae4 correlation c747f0c3-7ca2-46df-ad58-047442ace0e9 created: 2025-01-30T14:09:29.907364Z] Jan 30 14:11:07.159695 waagent[2030]: 2025-01-30T14:11:07.159641Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 30 14:11:07.160433 waagent[2030]: 2025-01-30T14:11:07.160377Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 30 14:11:07.188942 waagent[2030]: 2025-01-30T14:11:07.188881Z INFO MonitorHandler ExtHandler Network interfaces: Jan 30 14:11:07.188942 waagent[2030]: Executing ['ip', '-a', '-o', 'link']: Jan 30 14:11:07.188942 waagent[2030]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 30 14:11:07.188942 waagent[2030]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c3:b1:8a brd ff:ff:ff:ff:ff:ff Jan 30 14:11:07.188942 waagent[2030]: 3: enP50583s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c3:b1:8a brd ff:ff:ff:ff:ff:ff\ altname enP50583p0s2 Jan 30 14:11:07.188942 waagent[2030]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 30 14:11:07.188942 waagent[2030]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 30 14:11:07.188942 waagent[2030]: 2: eth0 inet 10.200.20.13/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 30 14:11:07.188942 waagent[2030]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 30 14:11:07.188942 waagent[2030]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 30 14:11:07.188942 waagent[2030]: 2: eth0 inet6 fe80::20d:3aff:fec3:b18a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 14:11:07.188942 waagent[2030]: 3: enP50583s1 inet6 fe80::20d:3aff:fec3:b18a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 14:11:07.232248 waagent[2030]: 2025-01-30T14:11:07.232184Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 33E6F1A8-E99D-4664-BE7A-948854994678;DroppedPackets: 0;UpdateGSErrors: 
0;AutoUpdate: 0] Jan 30 14:11:07.274423 waagent[2030]: 2025-01-30T14:11:07.274327Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 30 14:11:07.274423 waagent[2030]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 14:11:07.274423 waagent[2030]: pkts bytes target prot opt in out source destination Jan 30 14:11:07.274423 waagent[2030]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 14:11:07.274423 waagent[2030]: pkts bytes target prot opt in out source destination Jan 30 14:11:07.274423 waagent[2030]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 14:11:07.274423 waagent[2030]: pkts bytes target prot opt in out source destination Jan 30 14:11:07.274423 waagent[2030]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 14:11:07.274423 waagent[2030]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 14:11:07.274423 waagent[2030]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 14:11:07.278550 waagent[2030]: 2025-01-30T14:11:07.278474Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 30 14:11:07.278550 waagent[2030]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 14:11:07.278550 waagent[2030]: pkts bytes target prot opt in out source destination Jan 30 14:11:07.278550 waagent[2030]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 14:11:07.278550 waagent[2030]: pkts bytes target prot opt in out source destination Jan 30 14:11:07.278550 waagent[2030]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 14:11:07.278550 waagent[2030]: pkts bytes target prot opt in out source destination Jan 30 14:11:07.278550 waagent[2030]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 14:11:07.278550 waagent[2030]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 14:11:07.278550 waagent[2030]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 14:11:07.278868 waagent[2030]: 2025-01-30T14:11:07.278808Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 30 14:11:14.132111 chronyd[1776]: Selected source PHC0 Jan 30 14:11:16.982119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 14:11:16.990151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:17.434376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:17.437788 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:17.480174 kubelet[2299]: E0130 14:11:17.480133 2299 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:17.483339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:17.483490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:25.449850 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 30 14:11:27.732219 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 14:11:27.739952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 14:11:29.074951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:29.079163 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:29.116864 kubelet[2321]: E0130 14:11:29.116825 2321 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:29.121967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:29.122134 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:36.576824 update_engine[1803]: I20250130 14:11:36.576668 1803 update_attempter.cc:509] Updating boot flags... Jan 30 14:11:36.678827 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2342) Jan 30 14:11:36.752842 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2341) Jan 30 14:11:36.953529 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:11:36.962007 systemd[1]: Started sshd@0-10.200.20.13:22-10.200.16.10:45482.service - OpenSSH per-connection server daemon (10.200.16.10:45482). Jan 30 14:11:37.457287 sshd[2396]: Accepted publickey for core from 10.200.16.10 port 45482 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:11:37.458558 sshd[2396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:37.462227 systemd-logind[1798]: New session 3 of user core. Jan 30 14:11:37.474272 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:11:37.862997 systemd[1]: Started sshd@1-10.200.20.13:22-10.200.16.10:45494.service - OpenSSH per-connection server daemon (10.200.16.10:45494). Jan 30 14:11:38.308577 sshd[2401]: Accepted publickey for core from 10.200.16.10 port 45494 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:11:38.309917 sshd[2401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:38.313866 systemd-logind[1798]: New session 4 of user core. Jan 30 14:11:38.321143 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:11:38.640458 sshd[2401]: pam_unix(sshd:session): session closed for user core Jan 30 14:11:38.643724 systemd-logind[1798]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:11:38.644729 systemd[1]: sshd@1-10.200.20.13:22-10.200.16.10:45494.service: Deactivated successfully. Jan 30 14:11:38.647893 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:11:38.649516 systemd-logind[1798]: Removed session 4. Jan 30 14:11:38.728273 systemd[1]: Started sshd@2-10.200.20.13:22-10.200.16.10:45502.service - OpenSSH per-connection server daemon (10.200.16.10:45502). Jan 30 14:11:39.173441 sshd[2409]: Accepted publickey for core from 10.200.16.10 port 45502 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:11:39.174723 sshd[2409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:39.175586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 14:11:39.184958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 14:11:39.189731 systemd-logind[1798]: New session 5 of user core. Jan 30 14:11:39.190867 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:11:39.421962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:39.422550 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:39.464420 kubelet[2425]: E0130 14:11:39.464293 2425 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:39.466991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:39.467167 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:39.502386 sshd[2409]: pam_unix(sshd:session): session closed for user core Jan 30 14:11:39.504870 systemd[1]: sshd@2-10.200.20.13:22-10.200.16.10:45502.service: Deactivated successfully. Jan 30 14:11:39.508533 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:11:39.509597 systemd-logind[1798]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:11:39.511206 systemd-logind[1798]: Removed session 5. Jan 30 14:11:39.590148 systemd[1]: Started sshd@3-10.200.20.13:22-10.200.16.10:45510.service - OpenSSH per-connection server daemon (10.200.16.10:45510). Jan 30 14:11:40.016894 sshd[2438]: Accepted publickey for core from 10.200.16.10 port 45510 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:11:40.018205 sshd[2438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:40.021986 systemd-logind[1798]: New session 6 of user core. Jan 30 14:11:40.033046 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 14:11:40.349004 sshd[2438]: pam_unix(sshd:session): session closed for user core Jan 30 14:11:40.352799 systemd[1]: sshd@3-10.200.20.13:22-10.200.16.10:45510.service: Deactivated successfully. Jan 30 14:11:40.355844 systemd-logind[1798]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:11:40.356512 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:11:40.357496 systemd-logind[1798]: Removed session 6. Jan 30 14:11:40.425976 systemd[1]: Started sshd@4-10.200.20.13:22-10.200.16.10:45522.service - OpenSSH per-connection server daemon (10.200.16.10:45522). Jan 30 14:11:40.859329 sshd[2446]: Accepted publickey for core from 10.200.16.10 port 45522 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:11:40.860611 sshd[2446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:40.864623 systemd-logind[1798]: New session 7 of user core. Jan 30 14:11:40.870987 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:11:41.212056 sudo[2450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:11:41.212331 sudo[2450]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:11:41.242028 sudo[2450]: pam_unix(sudo:session): session closed for user root Jan 30 14:11:41.312053 sshd[2446]: pam_unix(sshd:session): session closed for user core Jan 30 14:11:41.317301 systemd[1]: sshd@4-10.200.20.13:22-10.200.16.10:45522.service: Deactivated successfully. 
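
The sudo record above shows the core user switching SELinux to enforcing mode as part of provisioning. The same step with a read-back check, as a sketch (the getenforce verification is an addition, not in the log):

    sudo /usr/sbin/setenforce 1   # enforcing until next boot
    getenforce                    # expect: Enforcing
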
Jan 30 14:11:41.319097 systemd-logind[1798]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:11:41.319636 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:11:41.320847 systemd-logind[1798]: Removed session 7. Jan 30 14:11:41.386981 systemd[1]: Started sshd@5-10.200.20.13:22-10.200.16.10:45536.service - OpenSSH per-connection server daemon (10.200.16.10:45536). Jan 30 14:11:41.816799 sshd[2455]: Accepted publickey for core from 10.200.16.10 port 45536 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:11:41.818125 sshd[2455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:41.822850 systemd-logind[1798]: New session 8 of user core. Jan 30 14:11:41.827978 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 14:11:42.065105 sudo[2460]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:11:42.065366 sudo[2460]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:11:42.068340 sudo[2460]: pam_unix(sudo:session): session closed for user root Jan 30 14:11:42.072641 sudo[2459]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:11:42.073194 sudo[2459]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:11:42.083992 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 14:11:42.086500 auditctl[2463]: No rules Jan 30 14:11:42.086838 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:11:42.087044 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:11:42.091327 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:11:42.112446 augenrules[2482]: No rules Jan 30 14:11:42.113632 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:11:42.115590 sudo[2459]: pam_unix(sudo:session): session closed for user root Jan 30 14:11:42.203161 sshd[2455]: pam_unix(sshd:session): session closed for user core Jan 30 14:11:42.206471 systemd[1]: sshd@5-10.200.20.13:22-10.200.16.10:45536.service: Deactivated successfully. Jan 30 14:11:42.209273 systemd-logind[1798]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:11:42.209685 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:11:42.210655 systemd-logind[1798]: Removed session 8. Jan 30 14:11:42.287989 systemd[1]: Started sshd@6-10.200.20.13:22-10.200.16.10:45552.service - OpenSSH per-connection server daemon (10.200.16.10:45552). Jan 30 14:11:42.717348 sshd[2491]: Accepted publickey for core from 10.200.16.10 port 45552 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:11:42.718642 sshd[2491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:11:42.722767 systemd-logind[1798]: New session 9 of user core. Jan 30 14:11:42.733003 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:11:42.966124 sudo[2495]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:11:42.966701 sudo[2495]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:11:44.043038 systemd[1]: Starting docker.service - Docker Application Container Engine... 
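
The session-8 commands above delete the default audit rule files and restart audit-rules, after which both auditctl and augenrules report an empty rule set, exactly as logged. Reproduced as a sketch:

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules   # reloads rules from the now-empty directory
    sudo auditctl -l                     # expect: No rules
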
Jan 30 14:11:44.043253 (dockerd)[2510]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:11:44.699057 dockerd[2510]: time="2025-01-30T14:11:44.698816277Z" level=info msg="Starting up" Jan 30 14:11:45.088715 dockerd[2510]: time="2025-01-30T14:11:45.088516984Z" level=info msg="Loading containers: start." Jan 30 14:11:45.228788 kernel: Initializing XFRM netlink socket Jan 30 14:11:45.374707 systemd-networkd[1394]: docker0: Link UP Jan 30 14:11:45.402315 dockerd[2510]: time="2025-01-30T14:11:45.401990349Z" level=info msg="Loading containers: done." Jan 30 14:11:45.422616 dockerd[2510]: time="2025-01-30T14:11:45.422561104Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:11:45.422797 dockerd[2510]: time="2025-01-30T14:11:45.422676624Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:11:45.422828 dockerd[2510]: time="2025-01-30T14:11:45.422819864Z" level=info msg="Daemon has completed initialization" Jan 30 14:11:45.482537 dockerd[2510]: time="2025-01-30T14:11:45.481820290Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:11:45.482178 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:11:47.737953 containerd[1828]: time="2025-01-30T14:11:47.737804714Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 14:11:49.482049 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 14:11:49.489926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:49.584306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:49.587007 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:49.621681 kubelet[2663]: E0130 14:11:49.621640 2663 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:49.623726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:49.623894 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:52.967079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4235907334.mount: Deactivated successfully. Jan 30 14:11:59.732079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 14:11:59.741961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:59.832957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
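
The dockerd warning above ("Not using native diff for overlay2") fires because the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR, which disables docker's native overlay diff path and can slow image builds. A sketch for checking both sides, assuming the kernel exposes its config at /proc/config.gz:

    docker info --format '{{.Driver}}'                # expect: overlay2
    zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR
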
Jan 30 14:11:59.836874 (kubelet)[2736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:59.878348 kubelet[2736]: E0130 14:11:59.878307 2736 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:59.880992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:59.881735 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:01.544851 containerd[1828]: time="2025-01-30T14:12:01.544077458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:01.844559 containerd[1828]: time="2025-01-30T14:12:01.844430724Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864935" Jan 30 14:12:01.851525 containerd[1828]: time="2025-01-30T14:12:01.851465602Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:01.856155 containerd[1828]: time="2025-01-30T14:12:01.856083441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:01.857312 containerd[1828]: time="2025-01-30T14:12:01.857136760Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 14.119272726s" Jan 30 14:12:01.857312 containerd[1828]: time="2025-01-30T14:12:01.857173600Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 14:12:01.876485 containerd[1828]: time="2025-01-30T14:12:01.876440474Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 14:12:03.511182 containerd[1828]: time="2025-01-30T14:12:03.511128817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:03.516460 containerd[1828]: time="2025-01-30T14:12:03.516255936Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901561" Jan 30 14:12:03.523634 containerd[1828]: time="2025-01-30T14:12:03.523604454Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:03.530737 containerd[1828]: time="2025-01-30T14:12:03.530672972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 
14:12:03.531886 containerd[1828]: time="2025-01-30T14:12:03.531721771Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.655237417s" Jan 30 14:12:03.531886 containerd[1828]: time="2025-01-30T14:12:03.531753931Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 14:12:03.551226 containerd[1828]: time="2025-01-30T14:12:03.551010646Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 14:12:04.779594 containerd[1828]: time="2025-01-30T14:12:04.779539193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:04.782296 containerd[1828]: time="2025-01-30T14:12:04.782084432Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164338" Jan 30 14:12:04.786751 containerd[1828]: time="2025-01-30T14:12:04.786707271Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:04.792475 containerd[1828]: time="2025-01-30T14:12:04.792431269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:04.793559 containerd[1828]: time="2025-01-30T14:12:04.793432829Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.242386303s" Jan 30 14:12:04.793559 containerd[1828]: time="2025-01-30T14:12:04.793466629Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 14:12:04.811573 containerd[1828]: time="2025-01-30T14:12:04.811513584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 14:12:06.048060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1105593062.mount: Deactivated successfully. 
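
The control-plane image pulls above run on demand, one after another, while the kubelet is still crash-looping. If provisioning time matters they could be pre-fetched through the CRI instead; a sketch using crictl (assumes crictl is installed and containerd's socket is at the default path), with the tags taken from the log:

    for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
        crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
            pull "registry.k8s.io/${img}:v1.30.9"
    done
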
Jan 30 14:12:07.752051 containerd[1828]: time="2025-01-30T14:12:07.751994705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:07.754717 containerd[1828]: time="2025-01-30T14:12:07.754676705Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662712" Jan 30 14:12:07.757839 containerd[1828]: time="2025-01-30T14:12:07.757787104Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:07.802003 containerd[1828]: time="2025-01-30T14:12:07.801918132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:07.802835 containerd[1828]: time="2025-01-30T14:12:07.802582652Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 2.991029348s" Jan 30 14:12:07.802835 containerd[1828]: time="2025-01-30T14:12:07.802618532Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 14:12:07.821817 containerd[1828]: time="2025-01-30T14:12:07.821777886Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:12:09.982069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 14:12:09.988933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:10.080952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:10.092129 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:10.131642 kubelet[2790]: E0130 14:12:10.131587 2790 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:10.135990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:10.136184 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:20.232263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 30 14:12:20.240958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:24.886930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
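
Restart counters 2 through 8 above tick at roughly ten-second intervals, consistent with a unit configured with Restart=on-failure and a RestartSec around 10s (an inference from the timestamps; the unit file itself is not shown in the log). Confirming from the host, as a sketch:

    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
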
Jan 30 14:12:24.900141 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:24.945683 kubelet[2810]: E0130 14:12:24.945620 2810 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:24.948425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:24.948704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:25.429358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3757102192.mount: Deactivated successfully. Jan 30 14:12:27.048893 containerd[1828]: time="2025-01-30T14:12:27.048832007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:27.094139 containerd[1828]: time="2025-01-30T14:12:27.093990553Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 30 14:12:27.099265 containerd[1828]: time="2025-01-30T14:12:27.099196232Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:27.142648 containerd[1828]: time="2025-01-30T14:12:27.142559578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:27.143951 containerd[1828]: time="2025-01-30T14:12:27.143801138Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 19.321979092s" Jan 30 14:12:27.143951 containerd[1828]: time="2025-01-30T14:12:27.143846538Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 14:12:27.164375 containerd[1828]: time="2025-01-30T14:12:27.164325572Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 14:12:28.368554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104496140.mount: Deactivated successfully. 
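
The coredns pull above takes over nineteen seconds against under three for kube-proxy, a plausible sign of registry or network contention while everything downloads at once. Verifying what actually landed in containerd's CRI namespace, as a sketch:

    ctr -n k8s.io images ls -q | grep -E 'coredns|pause|etcd'
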
Jan 30 14:12:28.593510 containerd[1828]: time="2025-01-30T14:12:28.593459772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:28.596818 containerd[1828]: time="2025-01-30T14:12:28.596769571Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 30 14:12:28.660986 containerd[1828]: time="2025-01-30T14:12:28.660670271Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:28.703779 containerd[1828]: time="2025-01-30T14:12:28.703688818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:28.704632 containerd[1828]: time="2025-01-30T14:12:28.704505458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 1.539973566s" Jan 30 14:12:28.704632 containerd[1828]: time="2025-01-30T14:12:28.704544218Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 14:12:28.723465 containerd[1828]: time="2025-01-30T14:12:28.723249892Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 14:12:30.259236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24439305.mount: Deactivated successfully. Jan 30 14:12:34.982220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 30 14:12:34.995961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:35.094002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:35.107126 (kubelet)[2914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:35.145169 kubelet[2914]: E0130 14:12:35.145097 2914 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:35.149968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:35.150137 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
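
The pause:3.9 image pulled above is a sandbox ("infra") image; note that later in this log containerd separately pulls pause:3.8 when the control-plane pod sandboxes are created, so the CRI plugin's configured default evidently differs from what was pre-pulled here. Which tag the plugin uses comes from its sandbox_image setting; a sketch for checking, assuming the stock config path:

    grep -n 'sandbox_image' /etc/containerd/config.toml
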
Jan 30 14:12:42.877570 containerd[1828]: time="2025-01-30T14:12:42.877513652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:42.936167 containerd[1828]: time="2025-01-30T14:12:42.936125674Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jan 30 14:12:42.940196 containerd[1828]: time="2025-01-30T14:12:42.940133873Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:42.985004 containerd[1828]: time="2025-01-30T14:12:42.984903699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:42.985866 containerd[1828]: time="2025-01-30T14:12:42.985722539Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 14.262425927s" Jan 30 14:12:42.985866 containerd[1828]: time="2025-01-30T14:12:42.985753899Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 14:12:45.232112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 30 14:12:45.239963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:45.716041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:45.719081 (kubelet)[3014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:45.769241 kubelet[3014]: E0130 14:12:45.769186 3014 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:45.772278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:45.772471 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:48.934008 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:48.946049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:48.972201 systemd[1]: Reloading requested from client PID 3030 ('systemctl') (unit session-9.scope)... Jan 30 14:12:48.972362 systemd[1]: Reloading... Jan 30 14:12:49.086793 zram_generator::config[3079]: No configuration found. Jan 30 14:12:49.193407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:12:49.266503 systemd[1]: Reloading finished in 293 ms. Jan 30 14:12:49.312117 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:12:49.312361 systemd[1]: kubelet.service: Failed with result 'signal'. 
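
The daemon reload above warns that docker.socket still points at the legacy /var/run/docker.sock path; systemd rewrites it to /run/docker.sock on the fly, but a drop-in makes the fix permanent and silences the warning. A minimal sketch using the standard override mechanism (clear the old ListenStream, then set the new one):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat > /etc/systemd/system/docker.socket.d/10-runpath.conf <<'EOF'
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload
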
Jan 30 14:12:49.312750 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:49.318040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:51.948620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:51.952746 (kubelet)[3149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:12:51.995022 kubelet[3149]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:12:51.995022 kubelet[3149]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:12:51.995022 kubelet[3149]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:12:51.995405 kubelet[3149]: I0130 14:12:51.995089 3149 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:12:52.476071 kubelet[3149]: I0130 14:12:52.476032 3149 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:12:52.476071 kubelet[3149]: I0130 14:12:52.476063 3149 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:12:52.476288 kubelet[3149]: I0130 14:12:52.476269 3149 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:12:52.488244 kubelet[3149]: I0130 14:12:52.488111 3149 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:12:52.489320 kubelet[3149]: E0130 14:12:52.488605 3149 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:52.496693 kubelet[3149]: I0130 14:12:52.496662 3149 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:12:52.498000 kubelet[3149]: I0130 14:12:52.497959 3149 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:12:52.498187 kubelet[3149]: I0130 14:12:52.498004 3149 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-eeb23789ea","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:12:52.498275 kubelet[3149]: I0130 14:12:52.498196 3149 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:12:52.498275 kubelet[3149]: I0130 14:12:52.498206 3149 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:12:52.498347 kubelet[3149]: I0130 14:12:52.498326 3149 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:12:52.499098 kubelet[3149]: I0130 14:12:52.499081 3149 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:12:52.499127 kubelet[3149]: I0130 14:12:52.499109 3149 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:12:52.499310 kubelet[3149]: I0130 14:12:52.499294 3149 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:12:52.499336 kubelet[3149]: I0130 14:12:52.499321 3149 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:12:52.504570 kubelet[3149]: W0130 14:12:52.502561 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-eeb23789ea&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:52.504570 kubelet[3149]: E0130 14:12:52.502623 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-eeb23789ea&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:52.504570 kubelet[3149]: I0130 14:12:52.502783 3149 kuberuntime_manager.go:261] 
"Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:12:52.504570 kubelet[3149]: I0130 14:12:52.502958 3149 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:12:52.504570 kubelet[3149]: W0130 14:12:52.503002 3149 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:12:52.504570 kubelet[3149]: W0130 14:12:52.504371 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:52.504570 kubelet[3149]: E0130 14:12:52.504426 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:52.506456 kubelet[3149]: I0130 14:12:52.506432 3149 server.go:1264] "Started kubelet" Jan 30 14:12:52.510942 kubelet[3149]: E0130 14:12:52.510814 3149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-eeb23789ea.181f7dddbd5b13a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-eeb23789ea,UID:ci-4081.3.0-a-eeb23789ea,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-eeb23789ea,},FirstTimestamp:2025-01-30 14:12:52.506407848 +0000 UTC m=+0.549875438,LastTimestamp:2025-01-30 14:12:52.506407848 +0000 UTC m=+0.549875438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-eeb23789ea,}" Jan 30 14:12:52.511251 kubelet[3149]: I0130 14:12:52.511223 3149 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:12:52.511791 kubelet[3149]: I0130 14:12:52.511692 3149 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:12:52.512037 kubelet[3149]: I0130 14:12:52.512008 3149 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:12:52.512788 kubelet[3149]: I0130 14:12:52.512770 3149 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:12:52.514647 kubelet[3149]: I0130 14:12:52.514617 3149 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:12:52.515339 kubelet[3149]: E0130 14:12:52.515319 3149 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:12:52.516936 kubelet[3149]: E0130 14:12:52.516826 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:12:52.517279 kubelet[3149]: I0130 14:12:52.517068 3149 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:12:52.517722 kubelet[3149]: I0130 14:12:52.517438 3149 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:12:52.518454 kubelet[3149]: I0130 14:12:52.518437 3149 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:12:52.518931 kubelet[3149]: W0130 14:12:52.518891 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:52.519048 kubelet[3149]: E0130 14:12:52.519036 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:52.519728 kubelet[3149]: I0130 14:12:52.519680 3149 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:12:52.520719 kubelet[3149]: I0130 14:12:52.519984 3149 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:12:52.520719 kubelet[3149]: E0130 14:12:52.520203 3149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-eeb23789ea?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="200ms" Jan 30 14:12:52.521597 kubelet[3149]: I0130 14:12:52.521567 3149 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:12:52.556420 kubelet[3149]: I0130 14:12:52.556398 3149 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:12:52.556637 kubelet[3149]: I0130 14:12:52.556626 3149 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:12:52.556703 kubelet[3149]: I0130 14:12:52.556696 3149 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:12:52.619482 kubelet[3149]: I0130 14:12:52.619452 3149 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:52.620048 kubelet[3149]: E0130 14:12:52.620019 3149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:52.720682 kubelet[3149]: E0130 14:12:52.720644 3149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-eeb23789ea?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="400ms" Jan 30 14:12:52.822637 kubelet[3149]: I0130 14:12:52.822604 3149 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:52.823016 kubelet[3149]: E0130 14:12:52.822986 3149 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.171896 kubelet[3149]: E0130 14:12:53.121648 3149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-eeb23789ea?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="800ms" Jan 30 14:12:55.171896 kubelet[3149]: I0130 14:12:53.224693 3149 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.171896 kubelet[3149]: E0130 14:12:53.225044 3149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.171896 kubelet[3149]: W0130 14:12:53.374149 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-eeb23789ea&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:55.171896 kubelet[3149]: E0130 14:12:53.374212 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-eeb23789ea&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:55.171896 kubelet[3149]: W0130 14:12:53.572226 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:55.171896 kubelet[3149]: E0130 14:12:53.572265 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:55.172398 kubelet[3149]: W0130 14:12:53.673048 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:55.172398 kubelet[3149]: E0130 14:12:53.673091 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:55.172398 kubelet[3149]: E0130 14:12:53.922314 3149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-eeb23789ea?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="1.6s" Jan 30 14:12:55.172398 kubelet[3149]: I0130 14:12:54.027686 3149 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.172398 kubelet[3149]: E0130 14:12:54.028135 3149 kubelet_node_status.go:96] "Unable to register node with API 
server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.172398 kubelet[3149]: E0130 14:12:54.613057 3149 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:55.234030 kubelet[3149]: I0130 14:12:55.233973 3149 policy_none.go:49] "None policy: Start" Jan 30 14:12:55.235032 kubelet[3149]: I0130 14:12:55.235004 3149 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:12:55.235032 kubelet[3149]: I0130 14:12:55.235040 3149 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:12:55.243442 kubelet[3149]: I0130 14:12:55.243403 3149 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:12:55.243682 kubelet[3149]: I0130 14:12:55.243626 3149 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:12:55.243806 kubelet[3149]: I0130 14:12:55.243791 3149 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:12:55.248164 kubelet[3149]: E0130 14:12:55.248123 3149 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:12:55.354974 kubelet[3149]: I0130 14:12:55.354711 3149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:12:55.356540 kubelet[3149]: I0130 14:12:55.356512 3149 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:12:55.356732 kubelet[3149]: I0130 14:12:55.356606 3149 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:12:55.356732 kubelet[3149]: I0130 14:12:55.356629 3149 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:12:55.357062 kubelet[3149]: E0130 14:12:55.356894 3149 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 30 14:12:55.359183 kubelet[3149]: W0130 14:12:55.359112 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:55.359183 kubelet[3149]: E0130 14:12:55.359182 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:55.457783 kubelet[3149]: I0130 14:12:55.457396 3149 topology_manager.go:215] "Topology Admit Handler" podUID="dda280959be7122770f57a77eeb4b82d" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.462147 kubelet[3149]: I0130 14:12:55.460279 3149 topology_manager.go:215] "Topology Admit Handler" podUID="0060872ffdb2e991907d125dc1e432f1" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.465207 kubelet[3149]: I0130 14:12:55.465167 3149 topology_manager.go:215] "Topology Admit Handler" podUID="c63f4e5fb10aa6d7bc778c8deaa8b399" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.523346 kubelet[3149]: E0130 14:12:55.523293 3149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-eeb23789ea?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="3.2s" Jan 30 14:12:55.533596 kubelet[3149]: I0130 14:12:55.533559 3149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c63f4e5fb10aa6d7bc778c8deaa8b399-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-eeb23789ea\" (UID: \"c63f4e5fb10aa6d7bc778c8deaa8b399\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.533668 kubelet[3149]: I0130 14:12:55.533599 3149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dda280959be7122770f57a77eeb4b82d-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-eeb23789ea\" (UID: \"dda280959be7122770f57a77eeb4b82d\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.533668 kubelet[3149]: I0130 14:12:55.533624 3149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0060872ffdb2e991907d125dc1e432f1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-eeb23789ea\" (UID: \"0060872ffdb2e991907d125dc1e432f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.533668 kubelet[3149]: I0130 14:12:55.533643 3149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0060872ffdb2e991907d125dc1e432f1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-eeb23789ea\" (UID: \"0060872ffdb2e991907d125dc1e432f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.533668 kubelet[3149]: I0130 14:12:55.533659 3149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0060872ffdb2e991907d125dc1e432f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-eeb23789ea\" (UID: \"0060872ffdb2e991907d125dc1e432f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.533780 kubelet[3149]: I0130 14:12:55.533675 3149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0060872ffdb2e991907d125dc1e432f1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-eeb23789ea\" (UID: \"0060872ffdb2e991907d125dc1e432f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.533780 kubelet[3149]: I0130 14:12:55.533693 3149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0060872ffdb2e991907d125dc1e432f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-eeb23789ea\" (UID: \"0060872ffdb2e991907d125dc1e432f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.533780 kubelet[3149]: I0130 14:12:55.533710 3149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dda280959be7122770f57a77eeb4b82d-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-eeb23789ea\" (UID: \"dda280959be7122770f57a77eeb4b82d\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.533780 kubelet[3149]: I0130 14:12:55.533727 3149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dda280959be7122770f57a77eeb4b82d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-eeb23789ea\" (UID: \"dda280959be7122770f57a77eeb4b82d\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.569415 kubelet[3149]: W0130 14:12:55.569356 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:55.569514 kubelet[3149]: E0130 14:12:55.569437 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:55.630319 kubelet[3149]: I0130 14:12:55.630277 3149 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.630713 kubelet[3149]: E0130 14:12:55.630684 3149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:55.754724 kubelet[3149]: E0130 14:12:55.754614 3149 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-eeb23789ea.181f7dddbd5b13a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-eeb23789ea,UID:ci-4081.3.0-a-eeb23789ea,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-eeb23789ea,},FirstTimestamp:2025-01-30 14:12:52.506407848 +0000 UTC m=+0.549875438,LastTimestamp:2025-01-30 14:12:52.506407848 +0000 UTC m=+0.549875438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-eeb23789ea,}" Jan 30 14:12:55.768759 containerd[1828]: time="2025-01-30T14:12:55.768715284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-eeb23789ea,Uid:dda280959be7122770f57a77eeb4b82d,Namespace:kube-system,Attempt:0,}" Jan 30 14:12:55.772898 containerd[1828]: time="2025-01-30T14:12:55.772860003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-eeb23789ea,Uid:0060872ffdb2e991907d125dc1e432f1,Namespace:kube-system,Attempt:0,}" Jan 30 14:12:55.775520 containerd[1828]: time="2025-01-30T14:12:55.775411402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-eeb23789ea,Uid:c63f4e5fb10aa6d7bc778c8deaa8b399,Namespace:kube-system,Attempt:0,}" Jan 30 14:12:55.873689 kubelet[3149]: W0130 14:12:55.873589 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-eeb23789ea&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:55.873689 kubelet[3149]: E0130 14:12:55.873658 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-eeb23789ea&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:56.296678 kubelet[3149]: W0130 14:12:56.296587 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:56.296678 kubelet[3149]: E0130 14:12:56.296655 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:56.526987 kubelet[3149]: W0130 14:12:56.526920 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:56.526987 kubelet[3149]: E0130 14:12:56.526989 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:57.579438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531522150.mount: Deactivated successfully. Jan 30 14:12:57.872840 containerd[1828]: time="2025-01-30T14:12:57.871882188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:12:57.874902 containerd[1828]: time="2025-01-30T14:12:57.874833107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 30 14:12:57.934867 containerd[1828]: time="2025-01-30T14:12:57.934806885Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:12:57.938803 containerd[1828]: time="2025-01-30T14:12:57.938451684Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:12:57.983913 containerd[1828]: time="2025-01-30T14:12:57.983849907Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:12:57.987098 containerd[1828]: time="2025-01-30T14:12:57.987053426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:12:58.024967 containerd[1828]: time="2025-01-30T14:12:58.024868092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:12:58.077458 containerd[1828]: time="2025-01-30T14:12:58.077378672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:12:58.080497 containerd[1828]: time="2025-01-30T14:12:58.080082151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.304576909s" Jan 30 14:12:58.083375 containerd[1828]: time="2025-01-30T14:12:58.083147430Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.314321786s" Jan 30 14:12:58.131176 containerd[1828]: time="2025-01-30T14:12:58.131037692Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.358092809s" Jan 30 14:12:58.670977 kubelet[3149]: W0130 14:12:58.670905 3149 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:58.670977 kubelet[3149]: E0130 14:12:58.670948 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:58.724631 kubelet[3149]: E0130 14:12:58.724573 3149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-eeb23789ea?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="6.4s" Jan 30 14:12:58.811172 kubelet[3149]: E0130 14:12:58.811140 3149 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:58.833384 kubelet[3149]: I0130 14:12:58.833079 3149 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:58.833629 kubelet[3149]: E0130 14:12:58.833587 3149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:12:58.899346 kubelet[3149]: W0130 14:12:58.899277 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:58.899346 kubelet[3149]: E0130 14:12:58.899323 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:12:59.369207 containerd[1828]: time="2025-01-30T14:12:59.369024811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:12:59.369836 containerd[1828]: time="2025-01-30T14:12:59.369554251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:12:59.369836 containerd[1828]: time="2025-01-30T14:12:59.369620851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:12:59.369836 containerd[1828]: time="2025-01-30T14:12:59.369633331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:59.369836 containerd[1828]: time="2025-01-30T14:12:59.369701131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:12:59.369836 containerd[1828]: time="2025-01-30T14:12:59.369773451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:59.372061 containerd[1828]: time="2025-01-30T14:12:59.371753251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:59.372309 containerd[1828]: time="2025-01-30T14:12:59.372212850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:59.376281 containerd[1828]: time="2025-01-30T14:12:59.376150849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:12:59.376281 containerd[1828]: time="2025-01-30T14:12:59.376221329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:12:59.376281 containerd[1828]: time="2025-01-30T14:12:59.376237129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:59.376525 containerd[1828]: time="2025-01-30T14:12:59.376327969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:12:59.512252 containerd[1828]: time="2025-01-30T14:12:59.512078611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-eeb23789ea,Uid:0060872ffdb2e991907d125dc1e432f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"25a892fafc765f7f9935cc88b10104536b3e0d676018bf3eea1c88e689cffb1d\"" Jan 30 14:12:59.513911 containerd[1828]: time="2025-01-30T14:12:59.513878011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-eeb23789ea,Uid:dda280959be7122770f57a77eeb4b82d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d20800531c3ee4580a745d4a474f70c52697ea4b3a3d497262083a7473eaac4b\"" Jan 30 14:12:59.519632 containerd[1828]: time="2025-01-30T14:12:59.519327529Z" level=info msg="CreateContainer within sandbox \"25a892fafc765f7f9935cc88b10104536b3e0d676018bf3eea1c88e689cffb1d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 14:12:59.519740 containerd[1828]: time="2025-01-30T14:12:59.519473849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-eeb23789ea,Uid:c63f4e5fb10aa6d7bc778c8deaa8b399,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f353f3a0ded899770a52f6fff48b550f2255e0d81e5a25ffd5d066d57e534f8\"" Jan 30 14:12:59.521379 containerd[1828]: time="2025-01-30T14:12:59.521290529Z" level=info msg="CreateContainer within sandbox \"d20800531c3ee4580a745d4a474f70c52697ea4b3a3d497262083a7473eaac4b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 14:12:59.524711 containerd[1828]: time="2025-01-30T14:12:59.524669408Z" level=info msg="CreateContainer within sandbox \"6f353f3a0ded899770a52f6fff48b550f2255e0d81e5a25ffd5d066d57e534f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 14:13:00.814234 kubelet[3149]: W0130 14:13:00.814166 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:13:00.814234 kubelet[3149]: E0130 14:13:00.814210 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:13:00.880124 kubelet[3149]: W0130 14:13:00.880055 3149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-eeb23789ea&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:13:00.880124 kubelet[3149]: E0130 14:13:00.880102 3149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-eeb23789ea&limit=500&resourceVersion=0": dial tcp 10.200.20.13:6443: connect: connection refused Jan 30 14:13:02.486310 containerd[1828]: time="2025-01-30T14:13:02.486149939Z" level=info msg="CreateContainer within sandbox \"25a892fafc765f7f9935cc88b10104536b3e0d676018bf3eea1c88e689cffb1d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"454786b44d09809336a0d1b599578efc0c84fcfc4762ea0a75ac556144f9ba79\"" Jan 30 14:13:02.487259 containerd[1828]: time="2025-01-30T14:13:02.487219219Z" level=info msg="StartContainer for \"454786b44d09809336a0d1b599578efc0c84fcfc4762ea0a75ac556144f9ba79\"" Jan 30 14:13:02.582752 containerd[1828]: time="2025-01-30T14:13:02.582322672Z" level=info msg="CreateContainer within sandbox \"6f353f3a0ded899770a52f6fff48b550f2255e0d81e5a25ffd5d066d57e534f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"665b5143eed81f99b5cf8424513d8891b6e60daee8660f267efcbc9c56891817\"" Jan 30 14:13:02.582752 containerd[1828]: time="2025-01-30T14:13:02.582475592Z" level=info msg="StartContainer for \"454786b44d09809336a0d1b599578efc0c84fcfc4762ea0a75ac556144f9ba79\" returns successfully" Jan 30 14:13:02.586128 containerd[1828]: time="2025-01-30T14:13:02.586044271Z" level=info msg="StartContainer for \"665b5143eed81f99b5cf8424513d8891b6e60daee8660f267efcbc9c56891817\"" Jan 30 14:13:02.684371 containerd[1828]: time="2025-01-30T14:13:02.684309164Z" level=info msg="CreateContainer within sandbox \"d20800531c3ee4580a745d4a474f70c52697ea4b3a3d497262083a7473eaac4b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f73b7e1999852cb94187ef6c3056b4bbc17ae35f15040ada15228493d45e5658\"" Jan 30 14:13:02.684519 containerd[1828]: time="2025-01-30T14:13:02.684426444Z" level=info msg="StartContainer for \"665b5143eed81f99b5cf8424513d8891b6e60daee8660f267efcbc9c56891817\" returns successfully" Jan 30 14:13:02.685253 containerd[1828]: time="2025-01-30T14:13:02.685209763Z" level=info msg="StartContainer for \"f73b7e1999852cb94187ef6c3056b4bbc17ae35f15040ada15228493d45e5658\"" Jan 30 14:13:02.823708 containerd[1828]: time="2025-01-30T14:13:02.823022245Z" level=info msg="StartContainer for \"f73b7e1999852cb94187ef6c3056b4bbc17ae35f15040ada15228493d45e5658\" returns successfully" Jan 30 14:13:03.088300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866162365.mount: Deactivated successfully. 
Jan 30 14:13:05.153560 kubelet[3149]: E0130 14:13:05.153484 3149 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-eeb23789ea\" not found" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:05.192909 kubelet[3149]: E0130 14:13:05.192868 3149 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.0-a-eeb23789ea" not found Jan 30 14:13:05.237672 kubelet[3149]: I0130 14:13:05.237598 3149 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:05.248477 kubelet[3149]: E0130 14:13:05.248351 3149 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:05.249482 kubelet[3149]: I0130 14:13:05.249285 3149 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:05.259317 kubelet[3149]: E0130 14:13:05.259265 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:05.360258 kubelet[3149]: E0130 14:13:05.360202 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:05.461326 kubelet[3149]: E0130 14:13:05.461094 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:05.562069 kubelet[3149]: E0130 14:13:05.562017 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:05.662770 kubelet[3149]: E0130 14:13:05.662712 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:05.763882 kubelet[3149]: E0130 14:13:05.763811 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:05.865349 kubelet[3149]: E0130 14:13:05.864664 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:05.965507 kubelet[3149]: E0130 14:13:05.965321 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:06.065745 kubelet[3149]: E0130 14:13:06.065443 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:06.165973 kubelet[3149]: E0130 14:13:06.165937 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:06.266457 kubelet[3149]: E0130 14:13:06.266404 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:06.367616 kubelet[3149]: E0130 14:13:06.367458 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:06.467702 kubelet[3149]: E0130 14:13:06.467658 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:06.567942 kubelet[3149]: E0130 14:13:06.567901 3149 kubelet_node_status.go:462] "Error getting the 
current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:06.668690 kubelet[3149]: E0130 14:13:06.668565 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:06.769181 kubelet[3149]: E0130 14:13:06.769136 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:06.869701 kubelet[3149]: E0130 14:13:06.869634 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:06.887354 systemd[1]: Reloading requested from client PID 3429 ('systemctl') (unit session-9.scope)... Jan 30 14:13:06.887377 systemd[1]: Reloading... Jan 30 14:13:06.970917 kubelet[3149]: E0130 14:13:06.970240 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:07.007810 zram_generator::config[3472]: No configuration found. Jan 30 14:13:07.070471 kubelet[3149]: E0130 14:13:07.070424 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:07.123949 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:13:07.171006 kubelet[3149]: E0130 14:13:07.170956 3149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eeb23789ea\" not found" Jan 30 14:13:07.202968 systemd[1]: Reloading finished in 315 ms. Jan 30 14:13:07.231998 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:13:07.245121 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:13:07.245456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:13:07.253237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:13:07.425832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:13:07.434235 (kubelet)[3543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:13:07.481716 kubelet[3543]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:13:07.481716 kubelet[3543]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:13:07.481716 kubelet[3543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
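
The three deprecation warnings above point at the kubelet's --config file as the replacement for flags baked into this unit. Purely as a sketch — the endpoint and plugin path below are assumptions, not values read from this machine, and --pod-infra-container-image has no config-file equivalent per the warning itself — the flagged options map onto KubeletConfiguration fields that can be rendered with the upstream types:

    package main

    import (
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	cfg := kubeletv1beta1.KubeletConfiguration{
    		TypeMeta: metav1.TypeMeta{
    			APIVersion: "kubelet.config.k8s.io/v1beta1",
    			Kind:       "KubeletConfiguration",
    		},
    		// Replaces --container-runtime-endpoint (a config field since kubelet v1.27).
    		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
    		// Replaces --volume-plugin-dir; this path is an assumption.
    		VolumePluginDir: "/var/lib/kubelet/volumeplugins",
    	}
    	out, err := yaml.Marshal(&cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(string(out)) // write this into the file named by --config
    }
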
Jan 30 14:13:07.482116 kubelet[3543]: I0130 14:13:07.481783 3543 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:13:07.489367 kubelet[3543]: I0130 14:13:07.486773 3543 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:13:07.489367 kubelet[3543]: I0130 14:13:07.486797 3543 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:13:07.489367 kubelet[3543]: I0130 14:13:07.487009 3543 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:13:07.489367 kubelet[3543]: I0130 14:13:07.488427 3543 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 14:13:07.490421 kubelet[3543]: I0130 14:13:07.490386 3543 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:13:07.499343 kubelet[3543]: I0130 14:13:07.499218 3543 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 14:13:07.500297 kubelet[3543]: I0130 14:13:07.500142 3543 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:13:07.500697 kubelet[3543]: I0130 14:13:07.500182 3543 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-eeb23789ea","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:13:07.500697 kubelet[3543]: I0130 14:13:07.500657 3543 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:13:07.500697 kubelet[3543]: I0130 14:13:07.500667 3543 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:13:07.501288 kubelet[3543]: I0130 14:13:07.501265 3543 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:13:07.501406 kubelet[3543]: I0130 14:13:07.501389 3543 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:13:07.501406 kubelet[3543]: I0130 14:13:07.501405 3543 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Jan 30 14:13:07.501459 kubelet[3543]: I0130 14:13:07.501437 3543 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:13:07.501459 kubelet[3543]: I0130 14:13:07.501450 3543 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:13:07.504920 kubelet[3543]: I0130 14:13:07.504892 3543 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:13:07.505102 kubelet[3543]: I0130 14:13:07.505067 3543 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:13:07.505475 kubelet[3543]: I0130 14:13:07.505452 3543 server.go:1264] "Started kubelet" Jan 30 14:13:07.515805 kubelet[3543]: I0130 14:13:07.511085 3543 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:13:07.527110 kubelet[3543]: I0130 14:13:07.526416 3543 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:13:07.528038 kubelet[3543]: I0130 14:13:07.527978 3543 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:13:07.529355 kubelet[3543]: I0130 14:13:07.529267 3543 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:13:07.529663 kubelet[3543]: I0130 14:13:07.529644 3543 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:13:07.531803 kubelet[3543]: I0130 14:13:07.531723 3543 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:13:07.532781 kubelet[3543]: I0130 14:13:07.532715 3543 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:13:07.533367 kubelet[3543]: I0130 14:13:07.533022 3543 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:13:07.540198 kubelet[3543]: I0130 14:13:07.540154 3543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:13:07.541285 kubelet[3543]: I0130 14:13:07.541253 3543 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:13:07.541387 kubelet[3543]: I0130 14:13:07.541315 3543 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:13:07.541387 kubelet[3543]: I0130 14:13:07.541348 3543 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:13:07.541449 kubelet[3543]: E0130 14:13:07.541424 3543 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:13:07.546811 kubelet[3543]: I0130 14:13:07.546176 3543 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:13:07.546811 kubelet[3543]: I0130 14:13:07.546306 3543 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:13:07.551823 kubelet[3543]: I0130 14:13:07.551743 3543 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:13:07.630382 kubelet[3543]: I0130 14:13:07.630353 3543 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:13:07.630573 kubelet[3543]: I0130 14:13:07.630558 3543 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:13:07.630635 kubelet[3543]: I0130 14:13:07.630627 3543 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:13:07.630947 kubelet[3543]: I0130 14:13:07.630931 3543 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:13:07.631048 kubelet[3543]: I0130 14:13:07.631022 3543 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:13:07.631186 kubelet[3543]: I0130 14:13:07.631091 3543 policy_none.go:49] "None policy: Start" Jan 30 14:13:07.633391 kubelet[3543]: I0130 14:13:07.633067 3543 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:13:07.633391 kubelet[3543]: I0130 14:13:07.633097 3543 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:13:07.633391 kubelet[3543]: I0130 14:13:07.633264 3543 state_mem.go:75] "Updated machine memory state" Jan 30 14:13:07.634434 kubelet[3543]: I0130 14:13:07.634401 3543 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:13:07.634624 kubelet[3543]: I0130 14:13:07.634581 3543 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:13:07.637857 kubelet[3543]: I0130 14:13:07.637821 3543 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.638499 kubelet[3543]: I0130 14:13:07.638347 3543 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:13:07.641849 kubelet[3543]: I0130 14:13:07.641814 3543 topology_manager.go:215] "Topology Admit Handler" podUID="0060872ffdb2e991907d125dc1e432f1" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.642628 kubelet[3543]: I0130 14:13:07.642057 3543 topology_manager.go:215] "Topology Admit Handler" podUID="c63f4e5fb10aa6d7bc778c8deaa8b399" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.642628 kubelet[3543]: I0130 14:13:07.642100 3543 topology_manager.go:215] "Topology Admit Handler" podUID="dda280959be7122770f57a77eeb4b82d" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.662186 kubelet[3543]: I0130 14:13:07.662150 3543 kubelet_node_status.go:112] "Node 
was previously registered" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.662475 kubelet[3543]: I0130 14:13:07.662396 3543 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.665029 kubelet[3543]: W0130 14:13:07.664944 3543 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:13:07.666308 kubelet[3543]: W0130 14:13:07.666045 3543 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:13:07.666308 kubelet[3543]: W0130 14:13:07.666169 3543 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:13:07.735874 kubelet[3543]: I0130 14:13:07.735788 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0060872ffdb2e991907d125dc1e432f1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-eeb23789ea\" (UID: \"0060872ffdb2e991907d125dc1e432f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.735874 kubelet[3543]: I0130 14:13:07.735838 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c63f4e5fb10aa6d7bc778c8deaa8b399-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-eeb23789ea\" (UID: \"c63f4e5fb10aa6d7bc778c8deaa8b399\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.735874 kubelet[3543]: I0130 14:13:07.735861 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dda280959be7122770f57a77eeb4b82d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-eeb23789ea\" (UID: \"dda280959be7122770f57a77eeb4b82d\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.735874 kubelet[3543]: I0130 14:13:07.735881 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0060872ffdb2e991907d125dc1e432f1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-eeb23789ea\" (UID: \"0060872ffdb2e991907d125dc1e432f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.736151 kubelet[3543]: I0130 14:13:07.735897 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0060872ffdb2e991907d125dc1e432f1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-eeb23789ea\" (UID: \"0060872ffdb2e991907d125dc1e432f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.736151 kubelet[3543]: I0130 14:13:07.735915 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0060872ffdb2e991907d125dc1e432f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-eeb23789ea\" (UID: \"0060872ffdb2e991907d125dc1e432f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.736151 kubelet[3543]: I0130 14:13:07.735931 
3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dda280959be7122770f57a77eeb4b82d-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-eeb23789ea\" (UID: \"dda280959be7122770f57a77eeb4b82d\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.736151 kubelet[3543]: I0130 14:13:07.735947 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dda280959be7122770f57a77eeb4b82d-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-eeb23789ea\" (UID: \"dda280959be7122770f57a77eeb4b82d\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:07.736151 kubelet[3543]: I0130 14:13:07.735962 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0060872ffdb2e991907d125dc1e432f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-eeb23789ea\" (UID: \"0060872ffdb2e991907d125dc1e432f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eeb23789ea" Jan 30 14:13:09.180326 kubelet[3543]: I0130 14:13:08.508013 3543 apiserver.go:52] "Watching apiserver" Jan 30 14:13:09.180326 kubelet[3543]: I0130 14:13:08.533309 3543 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:13:09.180326 kubelet[3543]: I0130 14:13:08.622371 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-eeb23789ea" podStartSLOduration=1.622354932 podStartE2EDuration="1.622354932s" podCreationTimestamp="2025-01-30 14:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:08.622070252 +0000 UTC m=+1.184439429" watchObservedRunningTime="2025-01-30 14:13:08.622354932 +0000 UTC m=+1.184724069" Jan 30 14:13:09.180326 kubelet[3543]: I0130 14:13:08.635058 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eeb23789ea" podStartSLOduration=1.635040251 podStartE2EDuration="1.635040251s" podCreationTimestamp="2025-01-30 14:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:08.634963691 +0000 UTC m=+1.197332988" watchObservedRunningTime="2025-01-30 14:13:08.635040251 +0000 UTC m=+1.197409388" Jan 30 14:13:12.700573 kubelet[3543]: I0130 14:13:12.700503 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-eeb23789ea" podStartSLOduration=5.700486004 podStartE2EDuration="5.700486004s" podCreationTimestamp="2025-01-30 14:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:08.650897329 +0000 UTC m=+1.213266506" watchObservedRunningTime="2025-01-30 14:13:12.700486004 +0000 UTC m=+5.262855181" Jan 30 14:13:13.973186 sudo[2495]: pam_unix(sudo:session): session closed for user root Jan 30 14:13:14.061187 sshd[2491]: pam_unix(sshd:session): session closed for user core Jan 30 14:13:14.064424 systemd[1]: sshd@6-10.200.20.13:22-10.200.16.10:45552.service: Deactivated successfully. Jan 30 14:13:14.069438 systemd-logind[1798]: Session 9 logged out. 
Waiting for processes to exit. Jan 30 14:13:14.070956 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:13:14.072520 systemd-logind[1798]: Removed session 9. Jan 30 14:13:22.774058 kubelet[3543]: I0130 14:13:22.774007 3543 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:13:22.775321 containerd[1828]: time="2025-01-30T14:13:22.775027782Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:13:22.776131 kubelet[3543]: I0130 14:13:22.775352 3543 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:13:23.615866 kubelet[3543]: I0130 14:13:23.613983 3543 topology_manager.go:215] "Topology Admit Handler" podUID="9fc73515-34b0-49b9-ba84-0b1b8847f84d" podNamespace="kube-system" podName="kube-proxy-dwd9d" Jan 30 14:13:23.690670 kubelet[3543]: I0130 14:13:23.690014 3543 topology_manager.go:215] "Topology Admit Handler" podUID="b7498768-3af1-4719-94ac-64b66cd6ce28" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-qv4rf" Jan 30 14:13:23.741885 kubelet[3543]: I0130 14:13:23.741843 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9fc73515-34b0-49b9-ba84-0b1b8847f84d-kube-proxy\") pod \"kube-proxy-dwd9d\" (UID: \"9fc73515-34b0-49b9-ba84-0b1b8847f84d\") " pod="kube-system/kube-proxy-dwd9d" Jan 30 14:13:23.741885 kubelet[3543]: I0130 14:13:23.741887 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fc73515-34b0-49b9-ba84-0b1b8847f84d-xtables-lock\") pod \"kube-proxy-dwd9d\" (UID: \"9fc73515-34b0-49b9-ba84-0b1b8847f84d\") " pod="kube-system/kube-proxy-dwd9d" Jan 30 14:13:23.741885 kubelet[3543]: I0130 14:13:23.741910 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fc73515-34b0-49b9-ba84-0b1b8847f84d-lib-modules\") pod \"kube-proxy-dwd9d\" (UID: \"9fc73515-34b0-49b9-ba84-0b1b8847f84d\") " pod="kube-system/kube-proxy-dwd9d" Jan 30 14:13:23.741885 kubelet[3543]: I0130 14:13:23.741929 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh4nx\" (UniqueName: \"kubernetes.io/projected/9fc73515-34b0-49b9-ba84-0b1b8847f84d-kube-api-access-wh4nx\") pod \"kube-proxy-dwd9d\" (UID: \"9fc73515-34b0-49b9-ba84-0b1b8847f84d\") " pod="kube-system/kube-proxy-dwd9d" Jan 30 14:13:23.842555 kubelet[3543]: I0130 14:13:23.842416 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk7pz\" (UniqueName: \"kubernetes.io/projected/b7498768-3af1-4719-94ac-64b66cd6ce28-kube-api-access-mk7pz\") pod \"tigera-operator-7bc55997bb-qv4rf\" (UID: \"b7498768-3af1-4719-94ac-64b66cd6ce28\") " pod="tigera-operator/tigera-operator-7bc55997bb-qv4rf" Jan 30 14:13:23.842555 kubelet[3543]: I0130 14:13:23.842497 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b7498768-3af1-4719-94ac-64b66cd6ce28-var-lib-calico\") pod \"tigera-operator-7bc55997bb-qv4rf\" (UID: \"b7498768-3af1-4719-94ac-64b66cd6ce28\") " pod="tigera-operator/tigera-operator-7bc55997bb-qv4rf" Jan 30 14:13:23.920367 containerd[1828]: 
time="2025-01-30T14:13:23.920234088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dwd9d,Uid:9fc73515-34b0-49b9-ba84-0b1b8847f84d,Namespace:kube-system,Attempt:0,}" Jan 30 14:13:23.972682 containerd[1828]: time="2025-01-30T14:13:23.972568072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:23.972682 containerd[1828]: time="2025-01-30T14:13:23.972627432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:23.972682 containerd[1828]: time="2025-01-30T14:13:23.972644632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:23.973274 containerd[1828]: time="2025-01-30T14:13:23.972937312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:24.001727 containerd[1828]: time="2025-01-30T14:13:24.000675384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-qv4rf,Uid:b7498768-3af1-4719-94ac-64b66cd6ce28,Namespace:tigera-operator,Attempt:0,}" Jan 30 14:13:24.010197 containerd[1828]: time="2025-01-30T14:13:24.010052942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dwd9d,Uid:9fc73515-34b0-49b9-ba84-0b1b8847f84d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bd66304daf97ffd25b2e73f26c96fa3702741e062cc2af7c7aea5c607992f71\"" Jan 30 14:13:24.014895 containerd[1828]: time="2025-01-30T14:13:24.014856820Z" level=info msg="CreateContainer within sandbox \"1bd66304daf97ffd25b2e73f26c96fa3702741e062cc2af7c7aea5c607992f71\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:13:24.064903 containerd[1828]: time="2025-01-30T14:13:24.064728006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:24.065595 containerd[1828]: time="2025-01-30T14:13:24.065510765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:24.065595 containerd[1828]: time="2025-01-30T14:13:24.065552045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:24.065815 containerd[1828]: time="2025-01-30T14:13:24.065708845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:24.070397 containerd[1828]: time="2025-01-30T14:13:24.070142444Z" level=info msg="CreateContainer within sandbox \"1bd66304daf97ffd25b2e73f26c96fa3702741e062cc2af7c7aea5c607992f71\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b68404afbe8e1fb5a42c530b939ec0e509578d601e1c61e9742b43408858908\"" Jan 30 14:13:24.072808 containerd[1828]: time="2025-01-30T14:13:24.071939204Z" level=info msg="StartContainer for \"6b68404afbe8e1fb5a42c530b939ec0e509578d601e1c61e9742b43408858908\"" Jan 30 14:13:24.115291 containerd[1828]: time="2025-01-30T14:13:24.115243071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-qv4rf,Uid:b7498768-3af1-4719-94ac-64b66cd6ce28,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e3eee3a071f6a55092693078b40293daf1624d27f74324b0a378b8fcb609a9ad\"" Jan 30 14:13:24.118613 containerd[1828]: time="2025-01-30T14:13:24.118582550Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 14:13:24.138631 containerd[1828]: time="2025-01-30T14:13:24.138583064Z" level=info msg="StartContainer for \"6b68404afbe8e1fb5a42c530b939ec0e509578d601e1c61e9742b43408858908\" returns successfully" Jan 30 14:13:24.649336 kubelet[3543]: I0130 14:13:24.649002 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dwd9d" podStartSLOduration=1.648973715 podStartE2EDuration="1.648973715s" podCreationTimestamp="2025-01-30 14:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:24.648453155 +0000 UTC m=+17.210822332" watchObservedRunningTime="2025-01-30 14:13:24.648973715 +0000 UTC m=+17.211342892" Jan 30 14:13:25.838728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1205338091.mount: Deactivated successfully. 
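
The PullImage line at 14:13:24 starts the fetch whose ImageCreate and "Pulled image" events follow below. Sketched with the same cri-api client as before (socket path again assumed, not read from this host), this is roughly the call containerd is servicing on the kubelet's behalf:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed endpoint
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	img := runtimeapi.NewImageServiceClient(conn)

    	// Pull the operator image the kubelet requested above.
    	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("image ref:", resp.ImageRef) // digest-style reference on success
    }
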
Jan 30 14:13:26.237695 containerd[1828]: time="2025-01-30T14:13:26.236967092Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:26.241954 containerd[1828]: time="2025-01-30T14:13:26.241914051Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 30 14:13:26.246858 containerd[1828]: time="2025-01-30T14:13:26.246825569Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:26.252401 containerd[1828]: time="2025-01-30T14:13:26.252359208Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:26.253292 containerd[1828]: time="2025-01-30T14:13:26.253243967Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.134434177s" Jan 30 14:13:26.253292 containerd[1828]: time="2025-01-30T14:13:26.253288887Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 30 14:13:26.256690 containerd[1828]: time="2025-01-30T14:13:26.256133687Z" level=info msg="CreateContainer within sandbox \"e3eee3a071f6a55092693078b40293daf1624d27f74324b0a378b8fcb609a9ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 14:13:26.311913 containerd[1828]: time="2025-01-30T14:13:26.311870390Z" level=info msg="CreateContainer within sandbox \"e3eee3a071f6a55092693078b40293daf1624d27f74324b0a378b8fcb609a9ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a409a5c8b99558665a2edbc21c0543f8b8fda6f0a5417f5a7f4be66f54546447\"" Jan 30 14:13:26.313857 containerd[1828]: time="2025-01-30T14:13:26.312500590Z" level=info msg="StartContainer for \"a409a5c8b99558665a2edbc21c0543f8b8fda6f0a5417f5a7f4be66f54546447\"" Jan 30 14:13:26.368525 containerd[1828]: time="2025-01-30T14:13:26.368286494Z" level=info msg="StartContainer for \"a409a5c8b99558665a2edbc21c0543f8b8fda6f0a5417f5a7f4be66f54546447\" returns successfully" Jan 30 14:13:26.658093 kubelet[3543]: I0130 14:13:26.657896 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-qv4rf" podStartSLOduration=1.5208924320000001 podStartE2EDuration="3.657876889s" podCreationTimestamp="2025-01-30 14:13:23 +0000 UTC" firstStartedPulling="2025-01-30 14:13:24.11755435 +0000 UTC m=+16.679923527" lastFinishedPulling="2025-01-30 14:13:26.254538807 +0000 UTC m=+18.816907984" observedRunningTime="2025-01-30 14:13:26.65710809 +0000 UTC m=+19.219477227" watchObservedRunningTime="2025-01-30 14:13:26.657876889 +0000 UTC m=+19.220246066" Jan 30 14:13:30.314680 kubelet[3543]: I0130 14:13:30.314626 3543 topology_manager.go:215] "Topology Admit Handler" podUID="57d1ea6c-40fa-4600-91c5-875bd1a82b9c" podNamespace="calico-system" podName="calico-typha-59f578796d-9pq9m" Jan 30 14:13:30.428634 kubelet[3543]: I0130 14:13:30.428308 3543 topology_manager.go:215] 
"Topology Admit Handler" podUID="fc525066-ac7e-4508-9349-60e4e8e88fee" podNamespace="calico-system" podName="calico-node-l6nqx" Jan 30 14:13:30.483007 kubelet[3543]: I0130 14:13:30.482840 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc525066-ac7e-4508-9349-60e4e8e88fee-cni-bin-dir\") pod \"calico-node-l6nqx\" (UID: \"fc525066-ac7e-4508-9349-60e4e8e88fee\") " pod="calico-system/calico-node-l6nqx" Jan 30 14:13:30.483007 kubelet[3543]: I0130 14:13:30.482883 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc525066-ac7e-4508-9349-60e4e8e88fee-var-run-calico\") pod \"calico-node-l6nqx\" (UID: \"fc525066-ac7e-4508-9349-60e4e8e88fee\") " pod="calico-system/calico-node-l6nqx" Jan 30 14:13:30.483007 kubelet[3543]: I0130 14:13:30.482901 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc525066-ac7e-4508-9349-60e4e8e88fee-policysync\") pod \"calico-node-l6nqx\" (UID: \"fc525066-ac7e-4508-9349-60e4e8e88fee\") " pod="calico-system/calico-node-l6nqx" Jan 30 14:13:30.483007 kubelet[3543]: I0130 14:13:30.482918 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57d1ea6c-40fa-4600-91c5-875bd1a82b9c-tigera-ca-bundle\") pod \"calico-typha-59f578796d-9pq9m\" (UID: \"57d1ea6c-40fa-4600-91c5-875bd1a82b9c\") " pod="calico-system/calico-typha-59f578796d-9pq9m" Jan 30 14:13:30.483007 kubelet[3543]: I0130 14:13:30.482937 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc525066-ac7e-4508-9349-60e4e8e88fee-node-certs\") pod \"calico-node-l6nqx\" (UID: \"fc525066-ac7e-4508-9349-60e4e8e88fee\") " pod="calico-system/calico-node-l6nqx" Jan 30 14:13:30.483253 kubelet[3543]: I0130 14:13:30.482957 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc525066-ac7e-4508-9349-60e4e8e88fee-var-lib-calico\") pod \"calico-node-l6nqx\" (UID: \"fc525066-ac7e-4508-9349-60e4e8e88fee\") " pod="calico-system/calico-node-l6nqx" Jan 30 14:13:30.483253 kubelet[3543]: I0130 14:13:30.482975 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc4mz\" (UniqueName: \"kubernetes.io/projected/fc525066-ac7e-4508-9349-60e4e8e88fee-kube-api-access-fc4mz\") pod \"calico-node-l6nqx\" (UID: \"fc525066-ac7e-4508-9349-60e4e8e88fee\") " pod="calico-system/calico-node-l6nqx" Jan 30 14:13:30.483253 kubelet[3543]: I0130 14:13:30.483030 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc525066-ac7e-4508-9349-60e4e8e88fee-flexvol-driver-host\") pod \"calico-node-l6nqx\" (UID: \"fc525066-ac7e-4508-9349-60e4e8e88fee\") " pod="calico-system/calico-node-l6nqx" Jan 30 14:13:30.483253 kubelet[3543]: I0130 14:13:30.483068 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc525066-ac7e-4508-9349-60e4e8e88fee-tigera-ca-bundle\") pod \"calico-node-l6nqx\" 
(UID: \"fc525066-ac7e-4508-9349-60e4e8e88fee\") " pod="calico-system/calico-node-l6nqx" Jan 30 14:13:30.483253 kubelet[3543]: I0130 14:13:30.483087 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/57d1ea6c-40fa-4600-91c5-875bd1a82b9c-typha-certs\") pod \"calico-typha-59f578796d-9pq9m\" (UID: \"57d1ea6c-40fa-4600-91c5-875bd1a82b9c\") " pod="calico-system/calico-typha-59f578796d-9pq9m" Jan 30 14:13:30.483361 kubelet[3543]: I0130 14:13:30.483103 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdscx\" (UniqueName: \"kubernetes.io/projected/57d1ea6c-40fa-4600-91c5-875bd1a82b9c-kube-api-access-sdscx\") pod \"calico-typha-59f578796d-9pq9m\" (UID: \"57d1ea6c-40fa-4600-91c5-875bd1a82b9c\") " pod="calico-system/calico-typha-59f578796d-9pq9m" Jan 30 14:13:30.483361 kubelet[3543]: I0130 14:13:30.483120 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc525066-ac7e-4508-9349-60e4e8e88fee-lib-modules\") pod \"calico-node-l6nqx\" (UID: \"fc525066-ac7e-4508-9349-60e4e8e88fee\") " pod="calico-system/calico-node-l6nqx" Jan 30 14:13:30.483361 kubelet[3543]: I0130 14:13:30.483134 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc525066-ac7e-4508-9349-60e4e8e88fee-xtables-lock\") pod \"calico-node-l6nqx\" (UID: \"fc525066-ac7e-4508-9349-60e4e8e88fee\") " pod="calico-system/calico-node-l6nqx" Jan 30 14:13:30.483361 kubelet[3543]: I0130 14:13:30.483150 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fc525066-ac7e-4508-9349-60e4e8e88fee-cni-log-dir\") pod \"calico-node-l6nqx\" (UID: \"fc525066-ac7e-4508-9349-60e4e8e88fee\") " pod="calico-system/calico-node-l6nqx" Jan 30 14:13:30.483361 kubelet[3543]: I0130 14:13:30.483167 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc525066-ac7e-4508-9349-60e4e8e88fee-cni-net-dir\") pod \"calico-node-l6nqx\" (UID: \"fc525066-ac7e-4508-9349-60e4e8e88fee\") " pod="calico-system/calico-node-l6nqx" Jan 30 14:13:30.586883 kubelet[3543]: E0130 14:13:30.585706 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:30.586883 kubelet[3543]: W0130 14:13:30.585738 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:30.586883 kubelet[3543]: E0130 14:13:30.585864 3543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 30 14:13:30.586883 kubelet[3543]: E0130 14:13:30.585706 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:13:30.586883 kubelet[3543]: W0130 14:13:30.585738 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:13:30.586883 kubelet[3543]: E0130 14:13:30.585864 3543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The three-record burst above repeats, with only timestamps and record interleaving changing, from 14:13:30.585 through 14:13:30.901 as the kubelet re-probes the plugin directory; the duplicate bursts are elided below and only unique records are kept.]
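A quick sketch of the failure mechanism, since this burst is the capture's dominant noise: the kubelet's FlexVolume prober execs each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/<driver> with the argument init and expects a JSON status object on stdout. Here the nodeagent~uds directory exists but the uds binary does not, so the exec fails, stdout is empty, and unmarshalling "" produces "unexpected end of JSON input". The Go below is a minimal, hypothetical reproduction of that handshake, not kubelet's actual driver-call.go:

// flexprobe.go - hypothetical reproduction of the FlexVolume "init" probe.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the JSON a FlexVolume driver prints on success,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeInit(driver string) (*DriverStatus, error) {
	out, execErr := exec.Command(driver, "init").CombinedOutput()
	var st DriverStatus
	// With a missing binary, out is empty, so this unmarshal is the
	// "unexpected end of JSON input" reported at driver-call.go:262.
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	_, err := probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}

The errors are noisy but benign at this stage: the uds binary is exactly what Calico's pod2daemon-flexvol image (pulled at 14:13:30.844 below) installs, after which the probe can succeed.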
Jan 30 14:13:30.618829 kubelet[3543]: I0130 14:13:30.618491 3543 topology_manager.go:215] "Topology Admit Handler" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" podNamespace="calico-system" podName="csi-node-driver-mgkms"
Jan 30 14:13:30.618829 kubelet[3543]: E0130 14:13:30.618783 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc"
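The pod_workers error is the expected chicken-and-egg phase of CNI bring-up: csi-node-driver-mgkms cannot sync because no CNI plugin has initialized yet, and the CNI config only appears once calico-node (whose sandbox is created below) is running. The same NetworkPluginNotReady reason surfaces on the Node object's Ready condition; a hedged client-go sketch (a hypothetical diagnostic helper, not part of this system) for inspecting it:

// nodeready.go - hypothetical diagnostic: list nodes and print the Ready condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// While the CNI config is missing, Ready is False with reason
			// KubeletNotReady and a NetworkPluginNotReady message like the one logged above.
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s reason=%s msg=%s\n", n.Name, c.Status, c.Reason, c.Message)
			}
		}
	}
}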
Jan 30 14:13:30.736542 containerd[1828]: time="2025-01-30T14:13:30.736486819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6nqx,Uid:fc525066-ac7e-4508-9349-60e4e8e88fee,Namespace:calico-system,Attempt:0,}"
Jan 30 14:13:30.786683 kubelet[3543]: I0130 14:13:30.786337 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfb6f\" (UniqueName: \"kubernetes.io/projected/41b34c4a-b7b3-49a0-aec8-339d6c10a9dc-kube-api-access-bfb6f\") pod \"csi-node-driver-mgkms\" (UID: \"41b34c4a-b7b3-49a0-aec8-339d6c10a9dc\") " pod="calico-system/csi-node-driver-mgkms"
Jan 30 14:13:30.787366 kubelet[3543]: I0130 14:13:30.787047 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/41b34c4a-b7b3-49a0-aec8-339d6c10a9dc-socket-dir\") pod \"csi-node-driver-mgkms\" (UID: \"41b34c4a-b7b3-49a0-aec8-339d6c10a9dc\") " pod="calico-system/csi-node-driver-mgkms"
Jan 30 14:13:30.788080 kubelet[3543]: I0130 14:13:30.787815 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41b34c4a-b7b3-49a0-aec8-339d6c10a9dc-kubelet-dir\") pod \"csi-node-driver-mgkms\" (UID: \"41b34c4a-b7b3-49a0-aec8-339d6c10a9dc\") " pod="calico-system/csi-node-driver-mgkms"
Jan 30 14:13:30.789137 kubelet[3543]: I0130 14:13:30.789000 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/41b34c4a-b7b3-49a0-aec8-339d6c10a9dc-registration-dir\") pod \"csi-node-driver-mgkms\" (UID: \"41b34c4a-b7b3-49a0-aec8-339d6c10a9dc\") " pod="calico-system/csi-node-driver-mgkms"
Jan 30 14:13:30.791816 kubelet[3543]: I0130 14:13:30.791742 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/41b34c4a-b7b3-49a0-aec8-339d6c10a9dc-varrun\") pod \"csi-node-driver-mgkms\" (UID: \"41b34c4a-b7b3-49a0-aec8-339d6c10a9dc\") " pod="calico-system/csi-node-driver-mgkms"
Jan 30 14:13:30.802594 containerd[1828]: time="2025-01-30T14:13:30.802262733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:13:30.802594 containerd[1828]: time="2025-01-30T14:13:30.802354173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:13:30.802594 containerd[1828]: time="2025-01-30T14:13:30.802366173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:30.802594 containerd[1828]: time="2025-01-30T14:13:30.802472852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:30.838261 containerd[1828]: time="2025-01-30T14:13:30.838089547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6nqx,Uid:fc525066-ac7e-4508-9349-60e4e8e88fee,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d089dd922bba9fe83e123864e4485c1bd87d2bc1ecd588b6b681338a43b0642\"" Jan 30 14:13:30.844547 containerd[1828]: time="2025-01-30T14:13:30.844059423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 14:13:30.896373 kubelet[3543]: E0130 14:13:30.896342 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:30.896749 kubelet[3543]: W0130 14:13:30.896523 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:30.896749 kubelet[3543]: E0130 14:13:30.896565 3543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:30.898908 kubelet[3543]: E0130 14:13:30.898857 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:30.898908 kubelet[3543]: W0130 14:13:30.899395 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:30.898908 kubelet[3543]: E0130 14:13:30.899433 3543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:30.899835 kubelet[3543]: E0130 14:13:30.899697 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:30.899835 kubelet[3543]: W0130 14:13:30.899742 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:30.899835 kubelet[3543]: E0130 14:13:30.899801 3543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:30.901154 kubelet[3543]: E0130 14:13:30.900030 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:30.901154 kubelet[3543]: W0130 14:13:30.900050 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:30.901154 kubelet[3543]: E0130 14:13:30.900060 3543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
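The driver-call.go/plugins.go records repeating through this window are kubelet re-probing its FlexVolume plugin directory: a nodeagent~uds directory exists, but the uds executable inside it does not yet, so each init call produces no stdout and decoding that empty output fails with "unexpected end of JSON input". A FlexVolume driver is just an executable that receives the operation as its first argument and prints a JSON status object; a minimal sketch in Python (the driver path comes from the log above, everything else is illustrative):

```python
#!/usr/bin/env python3
# Minimal FlexVolume driver sketch. Kubelet invokes the executable at
# /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
# with a subcommand ("init", "mount", "unmount", ...) and parses the JSON
# the driver prints on stdout. An empty stdout is exactly what produces
# the "unexpected end of JSON input" errors in the records above.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # "attach": false tells kubelet not to expect attach/detach calls.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
        return 0
    # Operations this sketch does not implement.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```

The pod2daemon-flexvol image whose pull starts in the records above ships a driver of this kind; once the flexvol-driver container installs it, these probe errors stop appearing in the log.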
Jan 30 14:13:30.915741 kubelet[3543]: E0130 14:13:30.915714 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:30.915741 kubelet[3543]: W0130 14:13:30.915831 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:30.915741 kubelet[3543]: E0130 14:13:30.915854 3543 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:30.919835 containerd[1828]: time="2025-01-30T14:13:30.919791370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59f578796d-9pq9m,Uid:57d1ea6c-40fa-4600-91c5-875bd1a82b9c,Namespace:calico-system,Attempt:0,}"
Jan 30 14:13:30.979610 containerd[1828]: time="2025-01-30T14:13:30.979107528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:30.979885 containerd[1828]: time="2025-01-30T14:13:30.979635208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:30.979885 containerd[1828]: time="2025-01-30T14:13:30.979668808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:30.979956 containerd[1828]: time="2025-01-30T14:13:30.979882568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:31.029512 containerd[1828]: time="2025-01-30T14:13:31.029460893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59f578796d-9pq9m,Uid:57d1ea6c-40fa-4600-91c5-875bd1a82b9c,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ea7efa3ed2b547f737f5afd4ec6b3d246d5c1af9b486441fe0ccface8e48e18\"" Jan 30 14:13:32.541927 kubelet[3543]: E0130 14:13:32.541855 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:34.542309 kubelet[3543]: E0130 14:13:34.542235 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:36.542138 kubelet[3543]: E0130 14:13:36.542090 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:37.113315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3210870447.mount: Deactivated successfully. Jan 30 14:13:37.242349 containerd[1828]: time="2025-01-30T14:13:37.242291185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:37.245251 containerd[1828]: time="2025-01-30T14:13:37.245084344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 30 14:13:37.249192 containerd[1828]: time="2025-01-30T14:13:37.248754063Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:37.254058 containerd[1828]: time="2025-01-30T14:13:37.253973981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:37.254656 containerd[1828]: time="2025-01-30T14:13:37.254627381Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 6.410519518s" Jan 30 14:13:37.254784 containerd[1828]: time="2025-01-30T14:13:37.254745101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 30 14:13:37.256568 containerd[1828]: time="2025-01-30T14:13:37.256530661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 
14:13:37.258699 containerd[1828]: time="2025-01-30T14:13:37.257848620Z" level=info msg="CreateContainer within sandbox \"4d089dd922bba9fe83e123864e4485c1bd87d2bc1ecd588b6b681338a43b0642\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 14:13:37.317561 containerd[1828]: time="2025-01-30T14:13:37.317495682Z" level=info msg="CreateContainer within sandbox \"4d089dd922bba9fe83e123864e4485c1bd87d2bc1ecd588b6b681338a43b0642\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"16007ac1c5384b984be86619a295c40e12486d1e50d4b21586d3f7ee9af5368d\"" Jan 30 14:13:37.320040 containerd[1828]: time="2025-01-30T14:13:37.318498362Z" level=info msg="StartContainer for \"16007ac1c5384b984be86619a295c40e12486d1e50d4b21586d3f7ee9af5368d\"" Jan 30 14:13:37.394139 containerd[1828]: time="2025-01-30T14:13:37.393902140Z" level=info msg="StartContainer for \"16007ac1c5384b984be86619a295c40e12486d1e50d4b21586d3f7ee9af5368d\" returns successfully" Jan 30 14:13:37.907043 containerd[1828]: time="2025-01-30T14:13:37.701474929Z" level=error msg="collecting metrics for 16007ac1c5384b984be86619a295c40e12486d1e50d4b21586d3f7ee9af5368d" error="cgroups: cgroup deleted: unknown" Jan 30 14:13:37.991740 containerd[1828]: time="2025-01-30T14:13:37.991484203Z" level=info msg="shim disconnected" id=16007ac1c5384b984be86619a295c40e12486d1e50d4b21586d3f7ee9af5368d namespace=k8s.io Jan 30 14:13:37.991740 containerd[1828]: time="2025-01-30T14:13:37.991560123Z" level=warning msg="cleaning up after shim disconnected" id=16007ac1c5384b984be86619a295c40e12486d1e50d4b21586d3f7ee9af5368d namespace=k8s.io Jan 30 14:13:37.991740 containerd[1828]: time="2025-01-30T14:13:37.991569003Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:13:38.085528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16007ac1c5384b984be86619a295c40e12486d1e50d4b21586d3f7ee9af5368d-rootfs.mount: Deactivated successfully. 
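The sequence just above (StartContainer returning successfully, the metrics collector failing with "cgroup deleted: unknown", then "shim disconnected" and the rootfs unmount) is the normal footprint of a short-lived init container: flexvol-driver copies the FlexVolume driver binary onto the host and exits, and the runtime tears down its shim. A sketch of that install step, assuming it amounts to an atomic copy into the hostPath-mounted plugin directory (the source path and mount point here are assumptions, not taken from this system):

```python
#!/usr/bin/env python3
# Illustrative sketch of what a flexvol-driver init container does before
# exiting: copy the driver binary into kubelet's FlexVolume plugin dir.
import os
import shutil

SRC = "/usr/local/bin/flexvoldriver"   # assumed location inside the image
DST_DIR = "/host/driver"               # assumed hostPath mount of the plugin dir
DST = os.path.join(DST_DIR, "uds")

def install() -> None:
    tmp = DST + ".tmp"
    shutil.copy2(SRC, tmp)   # copy under a temporary name first
    os.chmod(tmp, 0o755)     # the driver must be executable
    os.replace(tmp, DST)     # atomic rename, so kubelet never probes a half-written binary
    # The container exits here; the "shim disconnected" / rootfs unmount
    # records above are the runtime cleaning up after that normal exit.

if __name__ == "__main__":
    install()
```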
Jan 30 14:13:38.542791 kubelet[3543]: E0130 14:13:38.542692 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:39.522376 containerd[1828]: time="2025-01-30T14:13:39.522315996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:39.568089 containerd[1828]: time="2025-01-30T14:13:39.568036329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516" Jan 30 14:13:39.615081 containerd[1828]: time="2025-01-30T14:13:39.614992621Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:39.619951 containerd[1828]: time="2025-01-30T14:13:39.619866018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:39.620907 containerd[1828]: time="2025-01-30T14:13:39.620734417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.364166796s" Jan 30 14:13:39.620907 containerd[1828]: time="2025-01-30T14:13:39.620792457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 30 14:13:39.624297 containerd[1828]: time="2025-01-30T14:13:39.622817616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 14:13:39.633857 containerd[1828]: time="2025-01-30T14:13:39.633816890Z" level=info msg="CreateContainer within sandbox \"6ea7efa3ed2b547f737f5afd4ec6b3d246d5c1af9b486441fe0ccface8e48e18\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 14:13:40.013721 containerd[1828]: time="2025-01-30T14:13:40.013650062Z" level=info msg="CreateContainer within sandbox \"6ea7efa3ed2b547f737f5afd4ec6b3d246d5c1af9b486441fe0ccface8e48e18\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"62e8234646294ac8b05da7b57ab81229a1c03a77258a57a75e6bf249799c3f0e\"" Jan 30 14:13:40.014532 containerd[1828]: time="2025-01-30T14:13:40.014428422Z" level=info msg="StartContainer for \"62e8234646294ac8b05da7b57ab81229a1c03a77258a57a75e6bf249799c3f0e\"" Jan 30 14:13:40.124847 containerd[1828]: time="2025-01-30T14:13:40.124674396Z" level=info msg="StartContainer for \"62e8234646294ac8b05da7b57ab81229a1c03a77258a57a75e6bf249799c3f0e\" returns successfully" Jan 30 14:13:40.543507 kubelet[3543]: E0130 14:13:40.542440 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 
14:13:40.703794 kubelet[3543]: I0130 14:13:40.703373 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59f578796d-9pq9m" podStartSLOduration=2.113045923 podStartE2EDuration="10.703355369s" podCreationTimestamp="2025-01-30 14:13:30 +0000 UTC" firstStartedPulling="2025-01-30 14:13:31.031556971 +0000 UTC m=+23.593926148" lastFinishedPulling="2025-01-30 14:13:39.621866417 +0000 UTC m=+32.184235594" observedRunningTime="2025-01-30 14:13:40.703166329 +0000 UTC m=+33.265535546" watchObservedRunningTime="2025-01-30 14:13:40.703355369 +0000 UTC m=+33.265724546" Jan 30 14:13:42.541992 kubelet[3543]: E0130 14:13:42.541924 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:44.541904 kubelet[3543]: E0130 14:13:44.541825 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:46.543423 kubelet[3543]: E0130 14:13:46.542923 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:46.814589 containerd[1828]: time="2025-01-30T14:13:46.814463357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:46.818241 containerd[1828]: time="2025-01-30T14:13:46.818086355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 30 14:13:46.862179 containerd[1828]: time="2025-01-30T14:13:46.862108417Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:46.908240 containerd[1828]: time="2025-01-30T14:13:46.908072718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:46.909786 containerd[1828]: time="2025-01-30T14:13:46.909296237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 7.286428141s" Jan 30 14:13:46.909786 containerd[1828]: time="2025-01-30T14:13:46.909333837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 30 14:13:46.914103 containerd[1828]: time="2025-01-30T14:13:46.913962835Z" level=info msg="CreateContainer within sandbox 
\"4d089dd922bba9fe83e123864e4485c1bd87d2bc1ecd588b6b681338a43b0642\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 14:13:47.208586 containerd[1828]: time="2025-01-30T14:13:47.208414713Z" level=info msg="CreateContainer within sandbox \"4d089dd922bba9fe83e123864e4485c1bd87d2bc1ecd588b6b681338a43b0642\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"527b3f7b49ffeb13cb5528e551ded4189f34a8e7f8719098da0c1b137b34d632\"" Jan 30 14:13:47.210811 containerd[1828]: time="2025-01-30T14:13:47.209417553Z" level=info msg="StartContainer for \"527b3f7b49ffeb13cb5528e551ded4189f34a8e7f8719098da0c1b137b34d632\"" Jan 30 14:13:47.269178 containerd[1828]: time="2025-01-30T14:13:47.269135568Z" level=info msg="StartContainer for \"527b3f7b49ffeb13cb5528e551ded4189f34a8e7f8719098da0c1b137b34d632\" returns successfully" Jan 30 14:13:48.541966 kubelet[3543]: E0130 14:13:48.541910 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:50.542311 kubelet[3543]: E0130 14:13:50.542259 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:52.542463 kubelet[3543]: E0130 14:13:52.542400 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:54.542263 kubelet[3543]: E0130 14:13:54.542211 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:56.661392 kubelet[3543]: E0130 14:13:56.541822 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:57.575140 containerd[1828]: time="2025-01-30T14:13:57.574998502Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:13:57.597818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-527b3f7b49ffeb13cb5528e551ded4189f34a8e7f8719098da0c1b137b34d632-rootfs.mount: Deactivated successfully. 
Jan 30 14:13:57.624331 kubelet[3543]: I0130 14:13:57.624271 3543 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 14:13:57.630055 containerd[1828]: time="2025-01-30T14:13:57.629726204Z" level=info msg="shim disconnected" id=527b3f7b49ffeb13cb5528e551ded4189f34a8e7f8719098da0c1b137b34d632 namespace=k8s.io Jan 30 14:13:57.630055 containerd[1828]: time="2025-01-30T14:13:57.629807644Z" level=warning msg="cleaning up after shim disconnected" id=527b3f7b49ffeb13cb5528e551ded4189f34a8e7f8719098da0c1b137b34d632 namespace=k8s.io Jan 30 14:13:57.630055 containerd[1828]: time="2025-01-30T14:13:57.629816364Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:13:57.658516 kubelet[3543]: I0130 14:13:57.658474 3543 topology_manager.go:215] "Topology Admit Handler" podUID="01dd86c1-a36d-4981-a399-a0bafb12e0de" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vv6cm" Jan 30 14:13:57.666858 kubelet[3543]: I0130 14:13:57.666655 3543 topology_manager.go:215] "Topology Admit Handler" podUID="890e2792-22b0-41bc-a56a-9ffff22368a2" podNamespace="calico-system" podName="calico-kube-controllers-546b67bdf9-xj859" Jan 30 14:13:57.672632 kubelet[3543]: I0130 14:13:57.671781 3543 topology_manager.go:215] "Topology Admit Handler" podUID="76abe81d-4667-42d1-9922-dae522fdac2f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fwz54" Jan 30 14:13:57.672632 kubelet[3543]: I0130 14:13:57.672150 3543 topology_manager.go:215] "Topology Admit Handler" podUID="05962a30-7d91-453e-aaf3-1d47860b0219" podNamespace="calico-apiserver" podName="calico-apiserver-69bb947986-b6l82" Jan 30 14:13:57.672632 kubelet[3543]: I0130 14:13:57.672513 3543 topology_manager.go:215] "Topology Admit Handler" podUID="d905625a-9071-4d1a-9572-328141962bc1" podNamespace="calico-apiserver" podName="calico-apiserver-69bb947986-4c92w" Jan 30 14:13:57.733184 containerd[1828]: time="2025-01-30T14:13:57.732811170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 14:13:57.801593 kubelet[3543]: I0130 14:13:57.800923 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8mz2\" (UniqueName: \"kubernetes.io/projected/05962a30-7d91-453e-aaf3-1d47860b0219-kube-api-access-p8mz2\") pod \"calico-apiserver-69bb947986-b6l82\" (UID: \"05962a30-7d91-453e-aaf3-1d47860b0219\") " pod="calico-apiserver/calico-apiserver-69bb947986-b6l82" Jan 30 14:13:57.801593 kubelet[3543]: I0130 14:13:57.800969 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-258hc\" (UniqueName: \"kubernetes.io/projected/76abe81d-4667-42d1-9922-dae522fdac2f-kube-api-access-258hc\") pod \"coredns-7db6d8ff4d-fwz54\" (UID: \"76abe81d-4667-42d1-9922-dae522fdac2f\") " pod="kube-system/coredns-7db6d8ff4d-fwz54" Jan 30 14:13:57.801593 kubelet[3543]: I0130 14:13:57.800990 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/05962a30-7d91-453e-aaf3-1d47860b0219-calico-apiserver-certs\") pod \"calico-apiserver-69bb947986-b6l82\" (UID: \"05962a30-7d91-453e-aaf3-1d47860b0219\") " pod="calico-apiserver/calico-apiserver-69bb947986-b6l82" Jan 30 14:13:57.801593 kubelet[3543]: I0130 14:13:57.801011 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6hct\" (UniqueName: 
\"kubernetes.io/projected/d905625a-9071-4d1a-9572-328141962bc1-kube-api-access-t6hct\") pod \"calico-apiserver-69bb947986-4c92w\" (UID: \"d905625a-9071-4d1a-9572-328141962bc1\") " pod="calico-apiserver/calico-apiserver-69bb947986-4c92w" Jan 30 14:13:57.801593 kubelet[3543]: I0130 14:13:57.801031 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx2nt\" (UniqueName: \"kubernetes.io/projected/01dd86c1-a36d-4981-a399-a0bafb12e0de-kube-api-access-zx2nt\") pod \"coredns-7db6d8ff4d-vv6cm\" (UID: \"01dd86c1-a36d-4981-a399-a0bafb12e0de\") " pod="kube-system/coredns-7db6d8ff4d-vv6cm" Jan 30 14:13:57.801875 kubelet[3543]: I0130 14:13:57.801046 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d905625a-9071-4d1a-9572-328141962bc1-calico-apiserver-certs\") pod \"calico-apiserver-69bb947986-4c92w\" (UID: \"d905625a-9071-4d1a-9572-328141962bc1\") " pod="calico-apiserver/calico-apiserver-69bb947986-4c92w" Jan 30 14:13:57.801875 kubelet[3543]: I0130 14:13:57.801070 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/890e2792-22b0-41bc-a56a-9ffff22368a2-tigera-ca-bundle\") pod \"calico-kube-controllers-546b67bdf9-xj859\" (UID: \"890e2792-22b0-41bc-a56a-9ffff22368a2\") " pod="calico-system/calico-kube-controllers-546b67bdf9-xj859" Jan 30 14:13:57.801875 kubelet[3543]: I0130 14:13:57.801086 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc2qb\" (UniqueName: \"kubernetes.io/projected/890e2792-22b0-41bc-a56a-9ffff22368a2-kube-api-access-nc2qb\") pod \"calico-kube-controllers-546b67bdf9-xj859\" (UID: \"890e2792-22b0-41bc-a56a-9ffff22368a2\") " pod="calico-system/calico-kube-controllers-546b67bdf9-xj859" Jan 30 14:13:57.801875 kubelet[3543]: I0130 14:13:57.801113 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01dd86c1-a36d-4981-a399-a0bafb12e0de-config-volume\") pod \"coredns-7db6d8ff4d-vv6cm\" (UID: \"01dd86c1-a36d-4981-a399-a0bafb12e0de\") " pod="kube-system/coredns-7db6d8ff4d-vv6cm" Jan 30 14:13:57.801875 kubelet[3543]: I0130 14:13:57.801129 3543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76abe81d-4667-42d1-9922-dae522fdac2f-config-volume\") pod \"coredns-7db6d8ff4d-fwz54\" (UID: \"76abe81d-4667-42d1-9922-dae522fdac2f\") " pod="kube-system/coredns-7db6d8ff4d-fwz54" Jan 30 14:13:57.975118 containerd[1828]: time="2025-01-30T14:13:57.974582490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vv6cm,Uid:01dd86c1-a36d-4981-a399-a0bafb12e0de,Namespace:kube-system,Attempt:0,}" Jan 30 14:13:57.979290 containerd[1828]: time="2025-01-30T14:13:57.979028649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bb947986-4c92w,Uid:d905625a-9071-4d1a-9572-328141962bc1,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:13:57.988717 containerd[1828]: time="2025-01-30T14:13:57.988488246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bb947986-b6l82,Uid:05962a30-7d91-453e-aaf3-1d47860b0219,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:13:57.990069 containerd[1828]: 
time="2025-01-30T14:13:57.989487925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546b67bdf9-xj859,Uid:890e2792-22b0-41bc-a56a-9ffff22368a2,Namespace:calico-system,Attempt:0,}" Jan 30 14:13:57.990069 containerd[1828]: time="2025-01-30T14:13:57.989514405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fwz54,Uid:76abe81d-4667-42d1-9922-dae522fdac2f,Namespace:kube-system,Attempt:0,}" Jan 30 14:13:58.302977 containerd[1828]: time="2025-01-30T14:13:58.302929022Z" level=error msg="Failed to destroy network for sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.306059 containerd[1828]: time="2025-01-30T14:13:58.306007821Z" level=error msg="encountered an error cleaning up failed sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.306540 containerd[1828]: time="2025-01-30T14:13:58.306408101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vv6cm,Uid:01dd86c1-a36d-4981-a399-a0bafb12e0de,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.306958 kubelet[3543]: E0130 14:13:58.306857 3543 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.306958 kubelet[3543]: E0130 14:13:58.306945 3543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vv6cm" Jan 30 14:13:58.307079 kubelet[3543]: E0130 14:13:58.306966 3543 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vv6cm" Jan 30 14:13:58.307079 kubelet[3543]: E0130 14:13:58.307011 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vv6cm_kube-system(01dd86c1-a36d-4981-a399-a0bafb12e0de)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-7db6d8ff4d-vv6cm_kube-system(01dd86c1-a36d-4981-a399-a0bafb12e0de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vv6cm" podUID="01dd86c1-a36d-4981-a399-a0bafb12e0de" Jan 30 14:13:58.325736 containerd[1828]: time="2025-01-30T14:13:58.325449735Z" level=error msg="Failed to destroy network for sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.326214 containerd[1828]: time="2025-01-30T14:13:58.326168814Z" level=error msg="encountered an error cleaning up failed sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.327447 containerd[1828]: time="2025-01-30T14:13:58.326829414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546b67bdf9-xj859,Uid:890e2792-22b0-41bc-a56a-9ffff22368a2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.327607 kubelet[3543]: E0130 14:13:58.327064 3543 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.327607 kubelet[3543]: E0130 14:13:58.327127 3543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-546b67bdf9-xj859" Jan 30 14:13:58.327607 kubelet[3543]: E0130 14:13:58.327149 3543 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-546b67bdf9-xj859" Jan 30 14:13:58.327701 kubelet[3543]: E0130 14:13:58.327188 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-546b67bdf9-xj859_calico-system(890e2792-22b0-41bc-a56a-9ffff22368a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-546b67bdf9-xj859_calico-system(890e2792-22b0-41bc-a56a-9ffff22368a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-546b67bdf9-xj859" podUID="890e2792-22b0-41bc-a56a-9ffff22368a2" Jan 30 14:13:58.343791 containerd[1828]: time="2025-01-30T14:13:58.343443329Z" level=error msg="Failed to destroy network for sandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.343791 containerd[1828]: time="2025-01-30T14:13:58.343775849Z" level=error msg="encountered an error cleaning up failed sandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.344039 containerd[1828]: time="2025-01-30T14:13:58.343822009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fwz54,Uid:76abe81d-4667-42d1-9922-dae522fdac2f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.344194 kubelet[3543]: E0130 14:13:58.344144 3543 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.344344 kubelet[3543]: E0130 14:13:58.344203 3543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fwz54" Jan 30 14:13:58.344344 kubelet[3543]: E0130 14:13:58.344225 3543 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fwz54" Jan 30 14:13:58.344344 kubelet[3543]: E0130 
14:13:58.344266 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-fwz54_kube-system(76abe81d-4667-42d1-9922-dae522fdac2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-fwz54_kube-system(76abe81d-4667-42d1-9922-dae522fdac2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fwz54" podUID="76abe81d-4667-42d1-9922-dae522fdac2f" Jan 30 14:13:58.349956 containerd[1828]: time="2025-01-30T14:13:58.349886247Z" level=error msg="Failed to destroy network for sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.351945 containerd[1828]: time="2025-01-30T14:13:58.351885166Z" level=error msg="encountered an error cleaning up failed sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.352038 containerd[1828]: time="2025-01-30T14:13:58.351971726Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bb947986-b6l82,Uid:05962a30-7d91-453e-aaf3-1d47860b0219,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.352361 kubelet[3543]: E0130 14:13:58.352226 3543 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.352430 kubelet[3543]: E0130 14:13:58.352386 3543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69bb947986-b6l82" Jan 30 14:13:58.352524 kubelet[3543]: E0130 14:13:58.352498 3543 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-69bb947986-b6l82" Jan 30 14:13:58.352745 kubelet[3543]: E0130 14:13:58.352553 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69bb947986-b6l82_calico-apiserver(05962a30-7d91-453e-aaf3-1d47860b0219)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69bb947986-b6l82_calico-apiserver(05962a30-7d91-453e-aaf3-1d47860b0219)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69bb947986-b6l82" podUID="05962a30-7d91-453e-aaf3-1d47860b0219" Jan 30 14:13:58.357595 containerd[1828]: time="2025-01-30T14:13:58.357548004Z" level=error msg="Failed to destroy network for sandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.357910 containerd[1828]: time="2025-01-30T14:13:58.357873964Z" level=error msg="encountered an error cleaning up failed sandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.357950 containerd[1828]: time="2025-01-30T14:13:58.357926604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bb947986-4c92w,Uid:d905625a-9071-4d1a-9572-328141962bc1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.358204 kubelet[3543]: E0130 14:13:58.358163 3543 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.358290 kubelet[3543]: E0130 14:13:58.358230 3543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69bb947986-4c92w" Jan 30 14:13:58.358290 kubelet[3543]: E0130 14:13:58.358250 3543 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69bb947986-4c92w" Jan 30 14:13:58.358339 kubelet[3543]: E0130 14:13:58.358298 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69bb947986-4c92w_calico-apiserver(d905625a-9071-4d1a-9572-328141962bc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69bb947986-4c92w_calico-apiserver(d905625a-9071-4d1a-9572-328141962bc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69bb947986-4c92w" podUID="d905625a-9071-4d1a-9572-328141962bc1" Jan 30 14:13:58.545295 containerd[1828]: time="2025-01-30T14:13:58.544951662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mgkms,Uid:41b34c4a-b7b3-49a0-aec8-339d6c10a9dc,Namespace:calico-system,Attempt:0,}" Jan 30 14:13:58.599973 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25-shm.mount: Deactivated successfully. Jan 30 14:13:58.663795 containerd[1828]: time="2025-01-30T14:13:58.662713063Z" level=error msg="Failed to destroy network for sandbox \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.663795 containerd[1828]: time="2025-01-30T14:13:58.663111783Z" level=error msg="encountered an error cleaning up failed sandbox \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.663795 containerd[1828]: time="2025-01-30T14:13:58.663168063Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mgkms,Uid:41b34c4a-b7b3-49a0-aec8-339d6c10a9dc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.665643 kubelet[3543]: E0130 14:13:58.665577 3543 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.665830 kubelet[3543]: E0130 14:13:58.665651 3543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mgkms" Jan 30 14:13:58.665830 kubelet[3543]: E0130 14:13:58.665672 3543 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mgkms" Jan 30 14:13:58.665830 kubelet[3543]: E0130 14:13:58.665716 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mgkms_calico-system(41b34c4a-b7b3-49a0-aec8-339d6c10a9dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mgkms_calico-system(41b34c4a-b7b3-49a0-aec8-339d6c10a9dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:58.667325 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4-shm.mount: Deactivated successfully. Jan 30 14:13:58.732972 kubelet[3543]: I0130 14:13:58.732936 3543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:13:58.734512 containerd[1828]: time="2025-01-30T14:13:58.733810840Z" level=info msg="StopPodSandbox for \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\"" Jan 30 14:13:58.735339 containerd[1828]: time="2025-01-30T14:13:58.735099800Z" level=info msg="Ensure that sandbox 94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25 in task-service has been cleanup successfully" Jan 30 14:13:58.737006 kubelet[3543]: I0130 14:13:58.736975 3543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:13:58.738350 containerd[1828]: time="2025-01-30T14:13:58.738283918Z" level=info msg="StopPodSandbox for \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\"" Jan 30 14:13:58.738515 containerd[1828]: time="2025-01-30T14:13:58.738489158Z" level=info msg="Ensure that sandbox 1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4 in task-service has been cleanup successfully" Jan 30 14:13:58.759294 kubelet[3543]: I0130 14:13:58.759201 3543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:13:58.761263 containerd[1828]: time="2025-01-30T14:13:58.760885591Z" level=info msg="StopPodSandbox for \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\"" Jan 30 14:13:58.763211 containerd[1828]: time="2025-01-30T14:13:58.761422551Z" level=info msg="Ensure that sandbox 
ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7 in task-service has been cleanup successfully" Jan 30 14:13:58.768005 kubelet[3543]: I0130 14:13:58.767390 3543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:13:58.768862 containerd[1828]: time="2025-01-30T14:13:58.768710548Z" level=info msg="StopPodSandbox for \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\"" Jan 30 14:13:58.769866 containerd[1828]: time="2025-01-30T14:13:58.769834628Z" level=info msg="Ensure that sandbox 59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd in task-service has been cleanup successfully" Jan 30 14:13:58.770355 kubelet[3543]: I0130 14:13:58.770308 3543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:13:58.773412 containerd[1828]: time="2025-01-30T14:13:58.773373227Z" level=info msg="StopPodSandbox for \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\"" Jan 30 14:13:58.776246 containerd[1828]: time="2025-01-30T14:13:58.776206426Z" level=info msg="Ensure that sandbox b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00 in task-service has been cleanup successfully" Jan 30 14:13:58.784279 kubelet[3543]: I0130 14:13:58.782104 3543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:13:58.784417 containerd[1828]: time="2025-01-30T14:13:58.783783023Z" level=info msg="StopPodSandbox for \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\"" Jan 30 14:13:58.785062 containerd[1828]: time="2025-01-30T14:13:58.784589663Z" level=info msg="Ensure that sandbox 1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7 in task-service has been cleanup successfully" Jan 30 14:13:58.873147 containerd[1828]: time="2025-01-30T14:13:58.873000794Z" level=error msg="StopPodSandbox for \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\" failed" error="failed to destroy network for sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.873919 kubelet[3543]: E0130 14:13:58.873878 3543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:13:58.874038 kubelet[3543]: E0130 14:13:58.873941 3543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25"} Jan 30 14:13:58.874038 kubelet[3543]: E0130 14:13:58.873999 3543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01dd86c1-a36d-4981-a399-a0bafb12e0de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:13:58.874038 kubelet[3543]: E0130 14:13:58.874019 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01dd86c1-a36d-4981-a399-a0bafb12e0de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vv6cm" podUID="01dd86c1-a36d-4981-a399-a0bafb12e0de" Jan 30 14:13:58.877302 containerd[1828]: time="2025-01-30T14:13:58.876916553Z" level=error msg="StopPodSandbox for \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\" failed" error="failed to destroy network for sandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.877433 kubelet[3543]: E0130 14:13:58.877150 3543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:13:58.877433 kubelet[3543]: E0130 14:13:58.877197 3543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd"} Jan 30 14:13:58.877433 kubelet[3543]: E0130 14:13:58.877236 3543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76abe81d-4667-42d1-9922-dae522fdac2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:13:58.877433 kubelet[3543]: E0130 14:13:58.877257 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76abe81d-4667-42d1-9922-dae522fdac2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fwz54" podUID="76abe81d-4667-42d1-9922-dae522fdac2f" Jan 30 14:13:58.880524 containerd[1828]: time="2025-01-30T14:13:58.879861192Z" level=error msg="StopPodSandbox for \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\" failed" 
error="failed to destroy network for sandbox \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.880524 containerd[1828]: time="2025-01-30T14:13:58.880015992Z" level=error msg="StopPodSandbox for \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\" failed" error="failed to destroy network for sandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.880676 kubelet[3543]: E0130 14:13:58.880236 3543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:13:58.880676 kubelet[3543]: E0130 14:13:58.880296 3543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4"} Jan 30 14:13:58.880676 kubelet[3543]: E0130 14:13:58.880330 3543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"41b34c4a-b7b3-49a0-aec8-339d6c10a9dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:13:58.880676 kubelet[3543]: E0130 14:13:58.880352 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41b34c4a-b7b3-49a0-aec8-339d6c10a9dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mgkms" podUID="41b34c4a-b7b3-49a0-aec8-339d6c10a9dc" Jan 30 14:13:58.881070 kubelet[3543]: E0130 14:13:58.880401 3543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:13:58.881070 kubelet[3543]: E0130 14:13:58.880435 3543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7"} Jan 30 14:13:58.881070 kubelet[3543]: 
E0130 14:13:58.880471 3543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d905625a-9071-4d1a-9572-328141962bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:13:58.881070 kubelet[3543]: E0130 14:13:58.880490 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d905625a-9071-4d1a-9572-328141962bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69bb947986-4c92w" podUID="d905625a-9071-4d1a-9572-328141962bc1" Jan 30 14:13:58.890306 containerd[1828]: time="2025-01-30T14:13:58.889857029Z" level=error msg="StopPodSandbox for \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\" failed" error="failed to destroy network for sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.890553 kubelet[3543]: E0130 14:13:58.890247 3543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:13:58.890553 kubelet[3543]: E0130 14:13:58.890293 3543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00"} Jan 30 14:13:58.890553 kubelet[3543]: E0130 14:13:58.890342 3543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"890e2792-22b0-41bc-a56a-9ffff22368a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:13:58.890553 kubelet[3543]: E0130 14:13:58.890367 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"890e2792-22b0-41bc-a56a-9ffff22368a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-546b67bdf9-xj859" podUID="890e2792-22b0-41bc-a56a-9ffff22368a2" Jan 30 14:13:58.894496 containerd[1828]: time="2025-01-30T14:13:58.894338507Z" level=error msg="StopPodSandbox for \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\" failed" error="failed to destroy network for sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:58.894649 kubelet[3543]: E0130 14:13:58.894595 3543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:13:58.894721 kubelet[3543]: E0130 14:13:58.894653 3543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7"} Jan 30 14:13:58.894721 kubelet[3543]: E0130 14:13:58.894693 3543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"05962a30-7d91-453e-aaf3-1d47860b0219\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:13:58.894827 kubelet[3543]: E0130 14:13:58.894714 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"05962a30-7d91-453e-aaf3-1d47860b0219\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69bb947986-b6l82" podUID="05962a30-7d91-453e-aaf3-1d47860b0219" Jan 30 14:14:05.204440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount949965125.mount: Deactivated successfully. 
Jan 30 14:14:10.544271 containerd[1828]: time="2025-01-30T14:14:10.544221952Z" level=info msg="StopPodSandbox for \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\"" Jan 30 14:14:10.868048 containerd[1828]: time="2025-01-30T14:14:10.759988916Z" level=error msg="StopPodSandbox for \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\" failed" error="failed to destroy network for sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:14:10.868183 kubelet[3543]: E0130 14:14:10.760208 3543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:14:10.868183 kubelet[3543]: E0130 14:14:10.760262 3543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7"} Jan 30 14:14:10.868183 kubelet[3543]: E0130 14:14:10.760296 3543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"05962a30-7d91-453e-aaf3-1d47860b0219\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:14:10.868183 kubelet[3543]: E0130 14:14:10.760316 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"05962a30-7d91-453e-aaf3-1d47860b0219\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69bb947986-b6l82" podUID="05962a30-7d91-453e-aaf3-1d47860b0219" Jan 30 14:14:11.175723 containerd[1828]: time="2025-01-30T14:14:11.175421229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:11.212582 containerd[1828]: time="2025-01-30T14:14:11.212524535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 30 14:14:11.260107 containerd[1828]: time="2025-01-30T14:14:11.260031546Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:11.307108 containerd[1828]: time="2025-01-30T14:14:11.307039997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:11.307971 containerd[1828]: time="2025-01-30T14:14:11.307800716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 13.574816746s" Jan 30 14:14:11.307971 containerd[1828]: time="2025-01-30T14:14:11.307846636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 30 14:14:11.324839 containerd[1828]: time="2025-01-30T14:14:11.324727691Z" level=info msg="CreateContainer within sandbox \"4d089dd922bba9fe83e123864e4485c1bd87d2bc1ecd588b6b681338a43b0642\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 14:14:11.521145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457322956.mount: Deactivated successfully. Jan 30 14:14:11.545200 containerd[1828]: time="2025-01-30T14:14:11.545150249Z" level=info msg="StopPodSandbox for \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\"" Jan 30 14:14:11.545608 containerd[1828]: time="2025-01-30T14:14:11.545514769Z" level=info msg="StopPodSandbox for \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\"" Jan 30 14:14:11.581170 containerd[1828]: time="2025-01-30T14:14:11.581120237Z" level=error msg="StopPodSandbox for \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\" failed" error="failed to destroy network for sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:14:11.581565 kubelet[3543]: E0130 14:14:11.581520 3543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:14:11.581682 kubelet[3543]: E0130 14:14:11.581576 3543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25"} Jan 30 14:14:11.581682 kubelet[3543]: E0130 14:14:11.581610 3543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01dd86c1-a36d-4981-a399-a0bafb12e0de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:14:11.581682 kubelet[3543]: E0130 14:14:11.581632 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01dd86c1-a36d-4981-a399-a0bafb12e0de\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vv6cm" podUID="01dd86c1-a36d-4981-a399-a0bafb12e0de" Jan 30 14:14:11.585301 containerd[1828]: time="2025-01-30T14:14:11.585236231Z" level=error msg="StopPodSandbox for \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\" failed" error="failed to destroy network for sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:14:11.585554 kubelet[3543]: E0130 14:14:11.585505 3543 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:14:11.585614 kubelet[3543]: E0130 14:14:11.585566 3543 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00"} Jan 30 14:14:11.585639 kubelet[3543]: E0130 14:14:11.585609 3543 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"890e2792-22b0-41bc-a56a-9ffff22368a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:14:11.585694 kubelet[3543]: E0130 14:14:11.585633 3543 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"890e2792-22b0-41bc-a56a-9ffff22368a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-546b67bdf9-xj859" podUID="890e2792-22b0-41bc-a56a-9ffff22368a2" Jan 30 14:14:11.672974 containerd[1828]: time="2025-01-30T14:14:11.672916183Z" level=info msg="CreateContainer within sandbox \"4d089dd922bba9fe83e123864e4485c1bd87d2bc1ecd588b6b681338a43b0642\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cdadc4b7608ba1b0678c1be97cacd333999dc8d161c87880a3fb945f4a2b5c04\"" Jan 30 14:14:11.675194 containerd[1828]: time="2025-01-30T14:14:11.675011860Z" level=info msg="StartContainer for \"cdadc4b7608ba1b0678c1be97cacd333999dc8d161c87880a3fb945f4a2b5c04\"" Jan 30 14:14:11.737239 containerd[1828]: time="2025-01-30T14:14:11.737183609Z" level=info msg="StartContainer for 
\"cdadc4b7608ba1b0678c1be97cacd333999dc8d161c87880a3fb945f4a2b5c04\" returns successfully" Jan 30 14:14:11.845231 kubelet[3543]: I0130 14:14:11.844995 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l6nqx" podStartSLOduration=1.378220361 podStartE2EDuration="41.844959371s" podCreationTimestamp="2025-01-30 14:13:30 +0000 UTC" firstStartedPulling="2025-01-30 14:13:30.842237544 +0000 UTC m=+23.404606681" lastFinishedPulling="2025-01-30 14:14:11.308976514 +0000 UTC m=+63.871345691" observedRunningTime="2025-01-30 14:14:11.844025013 +0000 UTC m=+64.406394190" watchObservedRunningTime="2025-01-30 14:14:11.844959371 +0000 UTC m=+64.407328628" Jan 30 14:14:11.961043 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 14:14:11.961164 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 14:14:12.543652 containerd[1828]: time="2025-01-30T14:14:12.543545671Z" level=info msg="StopPodSandbox for \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\"" Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.608 [INFO][4775] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.608 [INFO][4775] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" iface="eth0" netns="/var/run/netns/cni-1cd002c7-9831-8ddb-ff80-ae9d0ed5b1d5" Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.608 [INFO][4775] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" iface="eth0" netns="/var/run/netns/cni-1cd002c7-9831-8ddb-ff80-ae9d0ed5b1d5" Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.608 [INFO][4775] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" iface="eth0" netns="/var/run/netns/cni-1cd002c7-9831-8ddb-ff80-ae9d0ed5b1d5" Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.608 [INFO][4775] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.608 [INFO][4775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.629 [INFO][4781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" HandleID="k8s-pod-network.ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.629 [INFO][4781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.629 [INFO][4781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.637 [WARNING][4781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" HandleID="k8s-pod-network.ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.637 [INFO][4781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" HandleID="k8s-pod-network.ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.639 [INFO][4781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:12.643439 containerd[1828]: 2025-01-30 14:14:12.641 [INFO][4775] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:14:12.645638 containerd[1828]: time="2025-01-30T14:14:12.643853924Z" level=info msg="TearDown network for sandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\" successfully" Jan 30 14:14:12.645638 containerd[1828]: time="2025-01-30T14:14:12.643889364Z" level=info msg="StopPodSandbox for \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\" returns successfully" Jan 30 14:14:12.648222 systemd[1]: run-netns-cni\x2d1cd002c7\x2d9831\x2d8ddb\x2dff80\x2dae9d0ed5b1d5.mount: Deactivated successfully. Jan 30 14:14:12.653777 containerd[1828]: time="2025-01-30T14:14:12.653390470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bb947986-4c92w,Uid:d905625a-9071-4d1a-9572-328141962bc1,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:14:13.038725 systemd-networkd[1394]: cali28268635b69: Link UP Jan 30 14:14:13.040556 systemd-networkd[1394]: cali28268635b69: Gained carrier Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:12.932 [INFO][4810] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:12.947 [INFO][4810] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0 calico-apiserver-69bb947986- calico-apiserver d905625a-9071-4d1a-9572-328141962bc1 829 0 2025-01-30 14:13:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69bb947986 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-eeb23789ea calico-apiserver-69bb947986-4c92w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali28268635b69 [] []}} ContainerID="f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-4c92w" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-" Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:12.947 [INFO][4810] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-4c92w" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:14:13.058517 
containerd[1828]: 2025-01-30 14:14:12.975 [INFO][4821] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" HandleID="k8s-pod-network.f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:12.987 [INFO][4821] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" HandleID="k8s-pod-network.f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028eb70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-eeb23789ea", "pod":"calico-apiserver-69bb947986-4c92w", "timestamp":"2025-01-30 14:14:12.97509332 +0000 UTC"}, Hostname:"ci-4081.3.0-a-eeb23789ea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:12.987 [INFO][4821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:12.987 [INFO][4821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:12.987 [INFO][4821] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-eeb23789ea' Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:12.989 [INFO][4821] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:12.993 [INFO][4821] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:12.997 [INFO][4821] ipam/ipam.go 489: Trying affinity for 192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:12.999 [INFO][4821] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:13.001 [INFO][4821] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:13.001 [INFO][4821] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:13.003 [INFO][4821] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1 Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:13.008 [INFO][4821] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:13.015 [INFO][4821] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.69.193/26] block=192.168.69.192/26 handle="k8s-pod-network.f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:13.015 [INFO][4821] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.193/26] handle="k8s-pod-network.f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:13.016 [INFO][4821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:13.058517 containerd[1828]: 2025-01-30 14:14:13.016 [INFO][4821] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.193/26] IPv6=[] ContainerID="f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" HandleID="k8s-pod-network.f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:14:13.059270 containerd[1828]: 2025-01-30 14:14:13.018 [INFO][4810] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-4c92w" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0", GenerateName:"calico-apiserver-69bb947986-", Namespace:"calico-apiserver", SelfLink:"", UID:"d905625a-9071-4d1a-9572-328141962bc1", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bb947986", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"", Pod:"calico-apiserver-69bb947986-4c92w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali28268635b69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:13.059270 containerd[1828]: 2025-01-30 14:14:13.018 [INFO][4810] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.193/32] ContainerID="f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-4c92w" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:14:13.059270 containerd[1828]: 2025-01-30 14:14:13.018 [INFO][4810] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28268635b69 ContainerID="f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-4c92w" 
WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:14:13.059270 containerd[1828]: 2025-01-30 14:14:13.038 [INFO][4810] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-4c92w" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:14:13.059270 containerd[1828]: 2025-01-30 14:14:13.039 [INFO][4810] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-4c92w" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0", GenerateName:"calico-apiserver-69bb947986-", Namespace:"calico-apiserver", SelfLink:"", UID:"d905625a-9071-4d1a-9572-328141962bc1", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bb947986", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1", Pod:"calico-apiserver-69bb947986-4c92w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali28268635b69", MAC:"9a:18:ad:7d:8c:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:13.059270 containerd[1828]: 2025-01-30 14:14:13.056 [INFO][4810] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-4c92w" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:14:13.085348 containerd[1828]: time="2025-01-30T14:14:13.081374045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:14:13.085348 containerd[1828]: time="2025-01-30T14:14:13.081452845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:14:13.085348 containerd[1828]: time="2025-01-30T14:14:13.081468005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:13.085348 containerd[1828]: time="2025-01-30T14:14:13.081569325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:13.129867 containerd[1828]: time="2025-01-30T14:14:13.129814734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bb947986-4c92w,Uid:d905625a-9071-4d1a-9572-328141962bc1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1\"" Jan 30 14:14:13.134623 containerd[1828]: time="2025-01-30T14:14:13.134294008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:14:13.543135 containerd[1828]: time="2025-01-30T14:14:13.542804011Z" level=info msg="StopPodSandbox for \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\"" Jan 30 14:14:13.545535 containerd[1828]: time="2025-01-30T14:14:13.544315208Z" level=info msg="StopPodSandbox for \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\"" Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.611 [INFO][4959] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.613 [INFO][4959] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" iface="eth0" netns="/var/run/netns/cni-a879367c-0468-8abf-3565-6dec8cf08773" Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.613 [INFO][4959] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" iface="eth0" netns="/var/run/netns/cni-a879367c-0468-8abf-3565-6dec8cf08773" Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.614 [INFO][4959] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" iface="eth0" netns="/var/run/netns/cni-a879367c-0468-8abf-3565-6dec8cf08773" Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.614 [INFO][4959] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.614 [INFO][4959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.641 [INFO][4970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" HandleID="k8s-pod-network.59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.642 [INFO][4970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.642 [INFO][4970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.654 [WARNING][4970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" HandleID="k8s-pod-network.59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.654 [INFO][4970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" HandleID="k8s-pod-network.59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.656 [INFO][4970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:13.659995 containerd[1828]: 2025-01-30 14:14:13.658 [INFO][4959] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:14:13.660629 containerd[1828]: time="2025-01-30T14:14:13.660480959Z" level=info msg="TearDown network for sandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\" successfully" Jan 30 14:14:13.660629 containerd[1828]: time="2025-01-30T14:14:13.660521559Z" level=info msg="StopPodSandbox for \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\" returns successfully" Jan 30 14:14:13.664400 containerd[1828]: time="2025-01-30T14:14:13.664351633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fwz54,Uid:76abe81d-4667-42d1-9922-dae522fdac2f,Namespace:kube-system,Attempt:1,}" Jan 30 14:14:13.664749 systemd[1]: run-netns-cni\x2da879367c\x2d0468\x2d8abf\x2d3565\x2d6dec8cf08773.mount: Deactivated successfully. Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.618 [INFO][4958] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.619 [INFO][4958] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" iface="eth0" netns="/var/run/netns/cni-50645c14-3bed-d19c-1679-ebe24af8232f" Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.619 [INFO][4958] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" iface="eth0" netns="/var/run/netns/cni-50645c14-3bed-d19c-1679-ebe24af8232f" Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.619 [INFO][4958] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" iface="eth0" netns="/var/run/netns/cni-50645c14-3bed-d19c-1679-ebe24af8232f" Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.620 [INFO][4958] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.620 [INFO][4958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.658 [INFO][4974] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" HandleID="k8s-pod-network.1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Workload="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.659 [INFO][4974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.659 [INFO][4974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.670 [WARNING][4974] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" HandleID="k8s-pod-network.1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Workload="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.670 [INFO][4974] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" HandleID="k8s-pod-network.1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Workload="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.671 [INFO][4974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:13.674864 containerd[1828]: 2025-01-30 14:14:13.673 [INFO][4958] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:14:13.675894 containerd[1828]: time="2025-01-30T14:14:13.675434097Z" level=info msg="TearDown network for sandbox \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\" successfully" Jan 30 14:14:13.675894 containerd[1828]: time="2025-01-30T14:14:13.675464737Z" level=info msg="StopPodSandbox for \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\" returns successfully" Jan 30 14:14:13.677064 containerd[1828]: time="2025-01-30T14:14:13.677025655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mgkms,Uid:41b34c4a-b7b3-49a0-aec8-339d6c10a9dc,Namespace:calico-system,Attempt:1,}" Jan 30 14:14:13.679020 systemd[1]: run-netns-cni\x2d50645c14\x2d3bed\x2dd19c\x2d1679\x2debe24af8232f.mount: Deactivated successfully. 
Jan 30 14:14:14.447791 kernel: bpftool[5012]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 14:14:15.060079 systemd-networkd[1394]: cali28268635b69: Gained IPv6LL Jan 30 14:14:18.482708 systemd-networkd[1394]: vxlan.calico: Link UP Jan 30 14:14:18.482715 systemd-networkd[1394]: vxlan.calico: Gained carrier Jan 30 14:14:19.803363 systemd-networkd[1394]: calic8d8375593a: Link UP Jan 30 14:14:19.804088 systemd-networkd[1394]: calic8d8375593a: Gained carrier Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.696 [INFO][5126] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0 csi-node-driver- calico-system 41b34c4a-b7b3-49a0-aec8-339d6c10a9dc 841 0 2025-01-30 14:13:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-eeb23789ea csi-node-driver-mgkms eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic8d8375593a [] []}} ContainerID="696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" Namespace="calico-system" Pod="csi-node-driver-mgkms" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-" Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.696 [INFO][5126] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" Namespace="calico-system" Pod="csi-node-driver-mgkms" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.734 [INFO][5149] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" HandleID="k8s-pod-network.696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" Workload="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.749 [INFO][5149] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" HandleID="k8s-pod-network.696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" Workload="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003167f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-eeb23789ea", "pod":"csi-node-driver-mgkms", "timestamp":"2025-01-30 14:14:19.734866826 +0000 UTC"}, Hostname:"ci-4081.3.0-a-eeb23789ea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.749 [INFO][5149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.749 [INFO][5149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
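
Every assignment and release in these logs is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock": concurrent CNI invocations on the node are serialized so two pods cannot race for the same address. A sketch of that pattern using a flock(2)-style file lock — the lock path is illustrative, and this is only an approximation of Calico's actual locking, not its implementation:

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // withHostWideLock serializes critical sections across processes on
    // one host by holding an exclusive flock on a well-known file.
    func withHostWideLock(path string, fn func() error) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        // "About to acquire" corresponds to blocking here until granted.
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN) // "Released"
        return fn() // "Acquired host-wide IPAM lock"
    }

    func main() {
        err := withHostWideLock("/tmp/ipam.lock", func() error {
            fmt.Println("assigning addresses under the lock")
            return nil
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }

Holding the lock for the whole assign-and-write sequence is why the log shows lock acquisition before "Auto-assign 1 ipv4" and release only after the block write succeeds.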
Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.749 [INFO][5149] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-eeb23789ea' Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.752 [INFO][5149] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.756 [INFO][5149] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.766 [INFO][5149] ipam/ipam.go 489: Trying affinity for 192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.771 [INFO][5149] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.775 [INFO][5149] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.775 [INFO][5149] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.777 [INFO][5149] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0 Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.782 [INFO][5149] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.793 [INFO][5149] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.194/26] block=192.168.69.192/26 handle="k8s-pod-network.696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.793 [INFO][5149] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.194/26] handle="k8s-pod-network.696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.793 [INFO][5149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
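
The assignment walk above is Calico's block-affinity IPAM: the node holds an affinity for the /26 block 192.168.69.192/26, confirms and loads it, then claims the next free ordinal — here 192.168.69.194, with .195 and .196 claimed by the next two pods later in this log. A quick net/netip check that these addresses fall inside the block, plus a toy next-free scan (the used map is illustrative; the real allocator tracks ordinals in the datastore block document):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.69.192/26") // 64 addresses
        for _, s := range []string{"192.168.69.194", "192.168.69.195", "192.168.69.196"} {
            fmt.Println(s, "in block:", block.Contains(netip.MustParseAddr(s)))
        }

        // Toy allocator: scan the block for the first unreserved address.
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.69.192"): true, // block base
            netip.MustParseAddr("192.168.69.193"): true, // assigned earlier on this node
        }
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                fmt.Println("next free:", a) // 192.168.69.194
                break
            }
        }
    }

Block affinity is the reason the lookup happens per host ("Trying affinity for 192.168.69.192/26"): once a node owns a block, later assignments skip the pool-wide search entirely.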
Jan 30 14:14:19.846132 containerd[1828]: 2025-01-30 14:14:19.793 [INFO][5149] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.194/26] IPv6=[] ContainerID="696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" HandleID="k8s-pod-network.696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" Workload="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:14:19.847062 containerd[1828]: 2025-01-30 14:14:19.796 [INFO][5126] cni-plugin/k8s.go 386: Populated endpoint ContainerID="696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" Namespace="calico-system" Pod="csi-node-driver-mgkms" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41b34c4a-b7b3-49a0-aec8-339d6c10a9dc", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"", Pod:"csi-node-driver-mgkms", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8d8375593a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:19.847062 containerd[1828]: 2025-01-30 14:14:19.796 [INFO][5126] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.194/32] ContainerID="696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" Namespace="calico-system" Pod="csi-node-driver-mgkms" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:14:19.847062 containerd[1828]: 2025-01-30 14:14:19.796 [INFO][5126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8d8375593a ContainerID="696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" Namespace="calico-system" Pod="csi-node-driver-mgkms" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:14:19.847062 containerd[1828]: 2025-01-30 14:14:19.802 [INFO][5126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" Namespace="calico-system" Pod="csi-node-driver-mgkms" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:14:19.847062 containerd[1828]: 2025-01-30 14:14:19.803 [INFO][5126] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" Namespace="calico-system" Pod="csi-node-driver-mgkms" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41b34c4a-b7b3-49a0-aec8-339d6c10a9dc", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0", Pod:"csi-node-driver-mgkms", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8d8375593a", MAC:"72:6b:71:da:0e:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:19.847062 containerd[1828]: 2025-01-30 14:14:19.838 [INFO][5126] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0" Namespace="calico-system" Pod="csi-node-driver-mgkms" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:14:19.929154 systemd-networkd[1394]: calib72db9c2ebb: Link UP Jan 30 14:14:19.929946 systemd-networkd[1394]: calib72db9c2ebb: Gained carrier Jan 30 14:14:19.961366 containerd[1828]: time="2025-01-30T14:14:19.960082623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:14:19.961366 containerd[1828]: time="2025-01-30T14:14:19.960147103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:14:19.961366 containerd[1828]: time="2025-01-30T14:14:19.960162783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:19.961366 containerd[1828]: time="2025-01-30T14:14:19.960263463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.699 [INFO][5135] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0 coredns-7db6d8ff4d- kube-system 76abe81d-4667-42d1-9922-dae522fdac2f 840 0 2025-01-30 14:13:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-eeb23789ea coredns-7db6d8ff4d-fwz54 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib72db9c2ebb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fwz54" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-" Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.700 [INFO][5135] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fwz54" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.754 [INFO][5153] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" HandleID="k8s-pod-network.4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.772 [INFO][5153] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" HandleID="k8s-pod-network.4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000220b70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-eeb23789ea", "pod":"coredns-7db6d8ff4d-fwz54", "timestamp":"2025-01-30 14:14:19.754191128 +0000 UTC"}, Hostname:"ci-4081.3.0-a-eeb23789ea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.773 [INFO][5153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.793 [INFO][5153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.793 [INFO][5153] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-eeb23789ea' Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.797 [INFO][5153] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.810 [INFO][5153] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.836 [INFO][5153] ipam/ipam.go 489: Trying affinity for 192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.861 [INFO][5153] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.871 [INFO][5153] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.874 [INFO][5153] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.876 [INFO][5153] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8 Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.890 [INFO][5153] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.914 [INFO][5153] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.195/26] block=192.168.69.192/26 handle="k8s-pod-network.4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.914 [INFO][5153] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.195/26] handle="k8s-pod-network.4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.914 [INFO][5153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
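
"Creating new handle: k8s-pod-network.4a73e9d1..." shows the convention tying an IPAM reservation to its sandbox: the handle ID is the network name plus the container (sandbox) ID. That is what makes the earlier DEL path possible — "Releasing address using handleID", then a fallback to "Releasing address using workloadID" if the handle record is gone. A sketch of that two-step, idempotent release, with a plain map standing in for the datastore (names and structure are illustrative, not Calico's internals):

    package main

    import "fmt"

    // reservations maps handle IDs to claimed addresses; a map stands in
    // for the real datastore here.
    var reservations = map[string][]string{
        "k8s-pod-network.4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8": {"192.168.69.195"},
    }

    func handleID(containerID string) string {
        return "k8s-pod-network." + containerID
    }

    // release tries the handle first, then falls back to the workload ID,
    // treating a missing entry as success (WARNING-and-ignore in the log).
    func release(containerID, workloadID string) []string {
        for _, key := range []string{handleID(containerID), workloadID} {
            if ips, ok := reservations[key]; ok {
                delete(reservations, key)
                return ips
            }
        }
        return nil // nothing reserved: idempotent no-op
    }

    func main() {
        ips := release("4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8",
            "ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0")
        fmt.Println("released:", ips)
    }
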
Jan 30 14:14:19.977478 containerd[1828]: 2025-01-30 14:14:19.914 [INFO][5153] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.195/26] IPv6=[] ContainerID="4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" HandleID="k8s-pod-network.4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:14:19.981384 containerd[1828]: 2025-01-30 14:14:19.920 [INFO][5135] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fwz54" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"76abe81d-4667-42d1-9922-dae522fdac2f", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"", Pod:"coredns-7db6d8ff4d-fwz54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib72db9c2ebb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:19.981384 containerd[1828]: 2025-01-30 14:14:19.920 [INFO][5135] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.195/32] ContainerID="4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fwz54" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:14:19.981384 containerd[1828]: 2025-01-30 14:14:19.920 [INFO][5135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib72db9c2ebb ContainerID="4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fwz54" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:14:19.981384 containerd[1828]: 2025-01-30 14:14:19.929 [INFO][5135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fwz54" 
WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:14:19.981384 containerd[1828]: 2025-01-30 14:14:19.941 [INFO][5135] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fwz54" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"76abe81d-4667-42d1-9922-dae522fdac2f", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8", Pod:"coredns-7db6d8ff4d-fwz54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib72db9c2ebb", MAC:"b6:82:1e:89:d7:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:19.981384 containerd[1828]: 2025-01-30 14:14:19.967 [INFO][5135] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fwz54" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:14:20.030196 containerd[1828]: time="2025-01-30T14:14:20.030098680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mgkms,Uid:41b34c4a-b7b3-49a0-aec8-339d6c10a9dc,Namespace:calico-system,Attempt:1,} returns sandbox id \"696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0\"" Jan 30 14:14:20.037254 containerd[1828]: time="2025-01-30T14:14:20.034852956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:14:20.037254 containerd[1828]: time="2025-01-30T14:14:20.036503115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:14:20.037254 containerd[1828]: time="2025-01-30T14:14:20.036520875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:20.037254 containerd[1828]: time="2025-01-30T14:14:20.036661434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:20.083085 containerd[1828]: time="2025-01-30T14:14:20.083027433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fwz54,Uid:76abe81d-4667-42d1-9922-dae522fdac2f,Namespace:kube-system,Attempt:1,} returns sandbox id \"4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8\"" Jan 30 14:14:20.087801 containerd[1828]: time="2025-01-30T14:14:20.087714949Z" level=info msg="CreateContainer within sandbox \"4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:14:20.147177 containerd[1828]: time="2025-01-30T14:14:20.147049015Z" level=info msg="CreateContainer within sandbox \"4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3df807be7d885f67fe3124213adc2ba1c13d0b52967b7c3b482af85be95323c9\"" Jan 30 14:14:20.149703 containerd[1828]: time="2025-01-30T14:14:20.149305213Z" level=info msg="StartContainer for \"3df807be7d885f67fe3124213adc2ba1c13d0b52967b7c3b482af85be95323c9\"" Jan 30 14:14:20.203983 containerd[1828]: time="2025-01-30T14:14:20.203884484Z" level=info msg="StartContainer for \"3df807be7d885f67fe3124213adc2ba1c13d0b52967b7c3b482af85be95323c9\" returns successfully" Jan 30 14:14:20.244024 systemd-networkd[1394]: vxlan.calico: Gained IPv6LL Jan 30 14:14:20.646054 systemd[1]: run-containerd-runc-k8s.io-696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0-runc.O8y37K.mount: Deactivated successfully. 
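
The WorkloadEndpoint dumped for the coredns pod prints its ports in Go hex — Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the metrics port) — and its protocols as numorstring.Protocol, Calico's union of a numeric protocol or a name like "TCP". A minimal sketch of such a number-or-string union with JSON handling; this is our own illustrative type, not Calico's exact implementation:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Protocol holds either a numeric protocol or a name such as "TCP",
    // mirroring the union printed in the endpoint dump above.
    type Protocol struct {
        IsNum  bool
        NumVal uint8
        StrVal string
    }

    func (p *Protocol) UnmarshalJSON(b []byte) error {
        // Try the numeric form first; fall back to the string form.
        if err := json.Unmarshal(b, &p.NumVal); err == nil {
            p.IsNum = true
            return nil
        }
        return json.Unmarshal(b, &p.StrVal)
    }

    func main() {
        fmt.Println(0x35, 0x23c1) // 53 9153: the dns and metrics ports

        var byName, byNumber Protocol
        json.Unmarshal([]byte(`"TCP"`), &byName)
        json.Unmarshal([]byte(`6`), &byNumber)
        fmt.Printf("%+v %+v\n", byName, byNumber)
    }
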
Jan 30 14:14:20.820027 systemd-networkd[1394]: calic8d8375593a: Gained IPv6LL Jan 30 14:14:20.902876 kubelet[3543]: I0130 14:14:20.899987 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fwz54" podStartSLOduration=57.899968098 podStartE2EDuration="57.899968098s" podCreationTimestamp="2025-01-30 14:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:14:20.87570212 +0000 UTC m=+73.438071337" watchObservedRunningTime="2025-01-30 14:14:20.899968098 +0000 UTC m=+73.462337235" Jan 30 14:14:21.408880 containerd[1828]: time="2025-01-30T14:14:21.408749921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:21.412524 containerd[1828]: time="2025-01-30T14:14:21.412489317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 30 14:14:21.419667 containerd[1828]: time="2025-01-30T14:14:21.419635231Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:21.425838 containerd[1828]: time="2025-01-30T14:14:21.425738945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:21.427031 containerd[1828]: time="2025-01-30T14:14:21.426879184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 8.292402577s" Jan 30 14:14:21.427031 containerd[1828]: time="2025-01-30T14:14:21.426914344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:14:21.429445 containerd[1828]: time="2025-01-30T14:14:21.429148502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 14:14:21.431560 containerd[1828]: time="2025-01-30T14:14:21.431515380Z" level=info msg="CreateContainer within sandbox \"f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:14:21.459937 systemd-networkd[1394]: calib72db9c2ebb: Gained IPv6LL Jan 30 14:14:21.490853 containerd[1828]: time="2025-01-30T14:14:21.490805647Z" level=info msg="CreateContainer within sandbox \"f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"56aa692fb1f6b46b7fd257d90fc21091d0bbd4e76d1b32690cbdfaa26ac992d3\"" Jan 30 14:14:21.492959 containerd[1828]: time="2025-01-30T14:14:21.491795406Z" level=info msg="StartContainer for \"56aa692fb1f6b46b7fd257d90fc21091d0bbd4e76d1b32690cbdfaa26ac992d3\"" Jan 30 14:14:21.565037 containerd[1828]: time="2025-01-30T14:14:21.564957700Z" level=info msg="StartContainer for \"56aa692fb1f6b46b7fd257d90fc21091d0bbd4e76d1b32690cbdfaa26ac992d3\" returns successfully" Jan 30 14:14:21.636682 
systemd[1]: run-containerd-runc-k8s.io-56aa692fb1f6b46b7fd257d90fc21091d0bbd4e76d1b32690cbdfaa26ac992d3-runc.yhl0Xv.mount: Deactivated successfully. Jan 30 14:14:22.971493 containerd[1828]: time="2025-01-30T14:14:22.969171198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:22.976456 containerd[1828]: time="2025-01-30T14:14:22.976404271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 30 14:14:22.983571 containerd[1828]: time="2025-01-30T14:14:22.983051585Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:23.120163 containerd[1828]: time="2025-01-30T14:14:23.120085942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:23.121658 containerd[1828]: time="2025-01-30T14:14:23.121188701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.691991919s" Jan 30 14:14:23.121658 containerd[1828]: time="2025-01-30T14:14:23.121232701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 30 14:14:23.128129 containerd[1828]: time="2025-01-30T14:14:23.128098655Z" level=info msg="CreateContainer within sandbox \"696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 14:14:23.487491 kubelet[3543]: I0130 14:14:23.485641 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69bb947986-4c92w" podStartSLOduration=46.191604997 podStartE2EDuration="54.485619773s" podCreationTimestamp="2025-01-30 14:13:29 +0000 UTC" firstStartedPulling="2025-01-30 14:14:13.133899568 +0000 UTC m=+65.696268745" lastFinishedPulling="2025-01-30 14:14:21.427914384 +0000 UTC m=+73.990283521" observedRunningTime="2025-01-30 14:14:21.883735614 +0000 UTC m=+74.446104791" watchObservedRunningTime="2025-01-30 14:14:23.485619773 +0000 UTC m=+76.047988950" Jan 30 14:14:23.618235 containerd[1828]: time="2025-01-30T14:14:23.616435016Z" level=info msg="CreateContainer within sandbox \"696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fd3501437165452c8e844c50ad0cb684f0fb9ef49dbc855cf948729c77b0559d\"" Jan 30 14:14:23.618433 containerd[1828]: time="2025-01-30T14:14:23.618388574Z" level=info msg="StartContainer for \"fd3501437165452c8e844c50ad0cb684f0fb9ef49dbc855cf948729c77b0559d\"" Jan 30 14:14:23.660165 systemd[1]: run-containerd-runc-k8s.io-fd3501437165452c8e844c50ad0cb684f0fb9ef49dbc855cf948729c77b0559d-runc.ntBOnM.mount: Deactivated successfully. 
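
kubelet's pod_startup_latency_tracker entries above are timestamp subtraction: coredns was created at 14:13:23 and observed running at 14:14:20.899968098, giving podStartSLOduration=57.899968098s — and since its pulling timestamps are the zero value ("0001-01-01 00:00:00 +0000 UTC"), SLO duration equals E2E duration. For the apiserver pod, the roughly 8.29s pull window (firstStartedPulling to lastFinishedPulling) is excluded from the 54.49s E2E figure to yield the 46.19s SLO duration. The coredns arithmetic, reproduced with time.Parse using the timestamps from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-01-30 14:13:23 +0000 UTC")
        running, _ := time.Parse(layout, "2025-01-30 14:14:20.899968098 +0000 UTC")

        // No image pull was observed for coredns (zero-valued pulling
        // timestamps in the log), so SLO duration == E2E duration.
        fmt.Println(running.Sub(created).Seconds()) // 57.899968098
    }
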
Jan 30 14:14:23.705197 containerd[1828]: time="2025-01-30T14:14:23.705117456Z" level=info msg="StartContainer for \"fd3501437165452c8e844c50ad0cb684f0fb9ef49dbc855cf948729c77b0559d\" returns successfully" Jan 30 14:14:23.707898 containerd[1828]: time="2025-01-30T14:14:23.707687214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 14:14:25.544202 containerd[1828]: time="2025-01-30T14:14:25.544062483Z" level=info msg="StopPodSandbox for \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\"" Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.591 [INFO][5426] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.592 [INFO][5426] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" iface="eth0" netns="/var/run/netns/cni-c015ddbf-b8e1-5e04-0ef7-062b781bce67" Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.592 [INFO][5426] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" iface="eth0" netns="/var/run/netns/cni-c015ddbf-b8e1-5e04-0ef7-062b781bce67" Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.592 [INFO][5426] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" iface="eth0" netns="/var/run/netns/cni-c015ddbf-b8e1-5e04-0ef7-062b781bce67" Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.592 [INFO][5426] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.592 [INFO][5426] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.613 [INFO][5432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" HandleID="k8s-pod-network.1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.614 [INFO][5432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.614 [INFO][5432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.622 [WARNING][5432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" HandleID="k8s-pod-network.1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.622 [INFO][5432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" HandleID="k8s-pod-network.1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.623 [INFO][5432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:25.628694 containerd[1828]: 2025-01-30 14:14:25.625 [INFO][5426] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:14:25.629843 containerd[1828]: time="2025-01-30T14:14:25.628957326Z" level=info msg="TearDown network for sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\" successfully" Jan 30 14:14:25.629843 containerd[1828]: time="2025-01-30T14:14:25.629000846Z" level=info msg="StopPodSandbox for \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\" returns successfully" Jan 30 14:14:25.629964 systemd[1]: run-netns-cni\x2dc015ddbf\x2db8e1\x2d5e04\x2d0ef7\x2d062b781bce67.mount: Deactivated successfully. Jan 30 14:14:25.632166 containerd[1828]: time="2025-01-30T14:14:25.630086605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bb947986-b6l82,Uid:05962a30-7d91-453e-aaf3-1d47860b0219,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:14:25.811986 systemd-networkd[1394]: cali633ba3c96e1: Link UP Jan 30 14:14:25.812226 systemd-networkd[1394]: cali633ba3c96e1: Gained carrier Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.727 [INFO][5439] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0 calico-apiserver-69bb947986- calico-apiserver 05962a30-7d91-453e-aaf3-1d47860b0219 903 0 2025-01-30 14:13:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69bb947986 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-eeb23789ea calico-apiserver-69bb947986-b6l82 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali633ba3c96e1 [] []}} ContainerID="0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-b6l82" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-" Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.727 [INFO][5439] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-b6l82" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.756 [INFO][5449] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" HandleID="k8s-pod-network.0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.774 [INFO][5449] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" HandleID="k8s-pod-network.0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000333a70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-eeb23789ea", "pod":"calico-apiserver-69bb947986-b6l82", "timestamp":"2025-01-30 14:14:25.755967692 +0000 UTC"}, Hostname:"ci-4081.3.0-a-eeb23789ea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.774 [INFO][5449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.774 [INFO][5449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.774 [INFO][5449] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-eeb23789ea' Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.776 [INFO][5449] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.780 [INFO][5449] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.785 [INFO][5449] ipam/ipam.go 489: Trying affinity for 192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.787 [INFO][5449] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.789 [INFO][5449] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.790 [INFO][5449] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.791 [INFO][5449] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225 Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.796 [INFO][5449] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.806 [INFO][5449] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.196/26] block=192.168.69.192/26 
handle="k8s-pod-network.0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.806 [INFO][5449] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.196/26] handle="k8s-pod-network.0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.806 [INFO][5449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:25.836422 containerd[1828]: 2025-01-30 14:14:25.806 [INFO][5449] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.196/26] IPv6=[] ContainerID="0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" HandleID="k8s-pod-network.0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:14:25.837177 containerd[1828]: 2025-01-30 14:14:25.808 [INFO][5439] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-b6l82" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0", GenerateName:"calico-apiserver-69bb947986-", Namespace:"calico-apiserver", SelfLink:"", UID:"05962a30-7d91-453e-aaf3-1d47860b0219", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bb947986", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"", Pod:"calico-apiserver-69bb947986-b6l82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali633ba3c96e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:25.837177 containerd[1828]: 2025-01-30 14:14:25.809 [INFO][5439] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.196/32] ContainerID="0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-b6l82" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:14:25.837177 containerd[1828]: 2025-01-30 14:14:25.809 [INFO][5439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali633ba3c96e1 ContainerID="0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-b6l82" 
WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:14:25.837177 containerd[1828]: 2025-01-30 14:14:25.811 [INFO][5439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-b6l82" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:14:25.837177 containerd[1828]: 2025-01-30 14:14:25.812 [INFO][5439] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-b6l82" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0", GenerateName:"calico-apiserver-69bb947986-", Namespace:"calico-apiserver", SelfLink:"", UID:"05962a30-7d91-453e-aaf3-1d47860b0219", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bb947986", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225", Pod:"calico-apiserver-69bb947986-b6l82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali633ba3c96e1", MAC:"7e:51:d4:3e:53:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:25.837177 containerd[1828]: 2025-01-30 14:14:25.832 [INFO][5439] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225" Namespace="calico-apiserver" Pod="calico-apiserver-69bb947986-b6l82" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:14:25.862558 containerd[1828]: time="2025-01-30T14:14:25.862410836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:14:25.862558 containerd[1828]: time="2025-01-30T14:14:25.862475636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:14:25.862558 containerd[1828]: time="2025-01-30T14:14:25.862503796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:25.863032 containerd[1828]: time="2025-01-30T14:14:25.862604716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:25.915015 containerd[1828]: time="2025-01-30T14:14:25.914974989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bb947986-b6l82,Uid:05962a30-7d91-453e-aaf3-1d47860b0219,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225\"" Jan 30 14:14:25.917857 containerd[1828]: time="2025-01-30T14:14:25.917747427Z" level=info msg="CreateContainer within sandbox \"0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:14:25.963973 containerd[1828]: time="2025-01-30T14:14:25.963925225Z" level=info msg="CreateContainer within sandbox \"0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d22a5598885ee921624d580069c930caeeae883d7dd4b37607779f90351f007a\"" Jan 30 14:14:25.964874 containerd[1828]: time="2025-01-30T14:14:25.964556904Z" level=info msg="StartContainer for \"d22a5598885ee921624d580069c930caeeae883d7dd4b37607779f90351f007a\"" Jan 30 14:14:26.024674 containerd[1828]: time="2025-01-30T14:14:26.024594370Z" level=info msg="StartContainer for \"d22a5598885ee921624d580069c930caeeae883d7dd4b37607779f90351f007a\" returns successfully" Jan 30 14:14:26.547385 containerd[1828]: time="2025-01-30T14:14:26.546123756Z" level=info msg="StopPodSandbox for \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\"" Jan 30 14:14:26.548665 containerd[1828]: time="2025-01-30T14:14:26.548620151Z" level=info msg="StopPodSandbox for \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\"" Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.632 [INFO][5574] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.633 [INFO][5574] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" iface="eth0" netns="/var/run/netns/cni-a962c4c1-1284-0685-39b5-b5f759c805b9" Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.633 [INFO][5574] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" iface="eth0" netns="/var/run/netns/cni-a962c4c1-1284-0685-39b5-b5f759c805b9" Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.633 [INFO][5574] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" iface="eth0" netns="/var/run/netns/cni-a962c4c1-1284-0685-39b5-b5f759c805b9" Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.633 [INFO][5574] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.634 [INFO][5574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.683 [INFO][5586] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" HandleID="k8s-pod-network.b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.683 [INFO][5586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.683 [INFO][5586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.693 [WARNING][5586] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" HandleID="k8s-pod-network.b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.693 [INFO][5586] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" HandleID="k8s-pod-network.b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.696 [INFO][5586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:26.709561 containerd[1828]: 2025-01-30 14:14:26.703 [INFO][5574] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:14:26.717358 containerd[1828]: time="2025-01-30T14:14:26.716882065Z" level=info msg="TearDown network for sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\" successfully" Jan 30 14:14:26.717358 containerd[1828]: time="2025-01-30T14:14:26.716928464Z" level=info msg="StopPodSandbox for \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\" returns successfully" Jan 30 14:14:26.719374 systemd[1]: run-netns-cni\x2da962c4c1\x2d1284\x2d0685\x2d39b5\x2db5f759c805b9.mount: Deactivated successfully. 
Jan 30 14:14:26.721186 containerd[1828]: time="2025-01-30T14:14:26.719079940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546b67bdf9-xj859,Uid:890e2792-22b0-41bc-a56a-9ffff22368a2,Namespace:calico-system,Attempt:1,}" Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.669 [INFO][5573] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.669 [INFO][5573] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" iface="eth0" netns="/var/run/netns/cni-e61e3734-adc6-4b28-2c69-5a4e45b3108b" Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.670 [INFO][5573] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" iface="eth0" netns="/var/run/netns/cni-e61e3734-adc6-4b28-2c69-5a4e45b3108b" Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.671 [INFO][5573] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" iface="eth0" netns="/var/run/netns/cni-e61e3734-adc6-4b28-2c69-5a4e45b3108b" Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.671 [INFO][5573] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.671 [INFO][5573] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.712 [INFO][5591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" HandleID="k8s-pod-network.94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.713 [INFO][5591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.713 [INFO][5591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.726 [WARNING][5591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" HandleID="k8s-pod-network.94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.726 [INFO][5591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" HandleID="k8s-pod-network.94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.728 [INFO][5591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:26.736517 containerd[1828]: 2025-01-30 14:14:26.729 [INFO][5573] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:14:26.738456 containerd[1828]: time="2025-01-30T14:14:26.737046501Z" level=info msg="TearDown network for sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\" successfully" Jan 30 14:14:26.738456 containerd[1828]: time="2025-01-30T14:14:26.737094301Z" level=info msg="StopPodSandbox for \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\" returns successfully" Jan 30 14:14:26.738456 containerd[1828]: time="2025-01-30T14:14:26.738092818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vv6cm,Uid:01dd86c1-a36d-4981-a399-a0bafb12e0de,Namespace:kube-system,Attempt:1,}" Jan 30 14:14:26.741420 systemd[1]: run-netns-cni\x2de61e3734\x2dadc6\x2d4b28\x2d2c69\x2d5a4e45b3108b.mount: Deactivated successfully. Jan 30 14:14:26.905787 systemd-networkd[1394]: cali633ba3c96e1: Gained IPv6LL Jan 30 14:14:27.026379 systemd-networkd[1394]: cali18ccae6808c: Link UP Jan 30 14:14:27.030949 systemd-networkd[1394]: cali18ccae6808c: Gained carrier Jan 30 14:14:27.048796 kubelet[3543]: I0130 14:14:27.048282 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69bb947986-b6l82" podStartSLOduration=58.048261583 podStartE2EDuration="58.048261583s" podCreationTimestamp="2025-01-30 14:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:14:26.932951794 +0000 UTC m=+79.495320971" watchObservedRunningTime="2025-01-30 14:14:27.048261583 +0000 UTC m=+79.610630720" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.872 [INFO][5600] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0 calico-kube-controllers-546b67bdf9- calico-system 890e2792-22b0-41bc-a56a-9ffff22368a2 914 0 2025-01-30 14:13:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:546b67bdf9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-eeb23789ea calico-kube-controllers-546b67bdf9-xj859 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali18ccae6808c [] []}} ContainerID="d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" Namespace="calico-system" Pod="calico-kube-controllers-546b67bdf9-xj859" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.873 [INFO][5600] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" Namespace="calico-system" Pod="calico-kube-controllers-546b67bdf9-xj859" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.938 [INFO][5624] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" HandleID="k8s-pod-network.d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" 
Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.971 [INFO][5624] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" HandleID="k8s-pod-network.d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fa3e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-eeb23789ea", "pod":"calico-kube-controllers-546b67bdf9-xj859", "timestamp":"2025-01-30 14:14:26.938450742 +0000 UTC"}, Hostname:"ci-4081.3.0-a-eeb23789ea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.972 [INFO][5624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.972 [INFO][5624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.972 [INFO][5624] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-eeb23789ea' Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.974 [INFO][5624] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.979 [INFO][5624] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.987 [INFO][5624] ipam/ipam.go 489: Trying affinity for 192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.989 [INFO][5624] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.991 [INFO][5624] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.993 [INFO][5624] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:26.995 [INFO][5624] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41 Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:27.005 [INFO][5624] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:27.015 [INFO][5624] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.197/26] block=192.168.69.192/26 handle="k8s-pod-network.d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:27.015 [INFO][5624] ipam/ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.69.197/26] handle="k8s-pod-network.d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:27.015 [INFO][5624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:27.057087 containerd[1828]: 2025-01-30 14:14:27.015 [INFO][5624] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.197/26] IPv6=[] ContainerID="d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" HandleID="k8s-pod-network.d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:14:27.059103 containerd[1828]: 2025-01-30 14:14:27.019 [INFO][5600] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" Namespace="calico-system" Pod="calico-kube-controllers-546b67bdf9-xj859" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0", GenerateName:"calico-kube-controllers-546b67bdf9-", Namespace:"calico-system", SelfLink:"", UID:"890e2792-22b0-41bc-a56a-9ffff22368a2", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546b67bdf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"", Pod:"calico-kube-controllers-546b67bdf9-xj859", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18ccae6808c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:27.059103 containerd[1828]: 2025-01-30 14:14:27.020 [INFO][5600] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.197/32] ContainerID="d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" Namespace="calico-system" Pod="calico-kube-controllers-546b67bdf9-xj859" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:14:27.059103 containerd[1828]: 2025-01-30 14:14:27.020 [INFO][5600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18ccae6808c ContainerID="d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" Namespace="calico-system" Pod="calico-kube-controllers-546b67bdf9-xj859" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:14:27.059103 containerd[1828]: 2025-01-30 14:14:27.029 [INFO][5600] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" Namespace="calico-system" Pod="calico-kube-controllers-546b67bdf9-xj859" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:14:27.059103 containerd[1828]: 2025-01-30 14:14:27.030 [INFO][5600] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" Namespace="calico-system" Pod="calico-kube-controllers-546b67bdf9-xj859" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0", GenerateName:"calico-kube-controllers-546b67bdf9-", Namespace:"calico-system", SelfLink:"", UID:"890e2792-22b0-41bc-a56a-9ffff22368a2", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546b67bdf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41", Pod:"calico-kube-controllers-546b67bdf9-xj859", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18ccae6808c", MAC:"76:47:9c:35:ce:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:27.059103 containerd[1828]: 2025-01-30 14:14:27.052 [INFO][5600] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41" Namespace="calico-system" Pod="calico-kube-controllers-546b67bdf9-xj859" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:14:27.087680 systemd-networkd[1394]: cali234322d9f8e: Link UP Jan 30 14:14:27.087965 systemd-networkd[1394]: cali234322d9f8e: Gained carrier Jan 30 14:14:27.093787 containerd[1828]: time="2025-01-30T14:14:27.093506164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:14:27.093787 containerd[1828]: time="2025-01-30T14:14:27.093577364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:14:27.093787 containerd[1828]: time="2025-01-30T14:14:27.093592124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:27.093787 containerd[1828]: time="2025-01-30T14:14:27.093689164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:26.903 [INFO][5611] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0 coredns-7db6d8ff4d- kube-system 01dd86c1-a36d-4981-a399-a0bafb12e0de 915 0 2025-01-30 14:13:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-eeb23789ea coredns-7db6d8ff4d-vv6cm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali234322d9f8e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vv6cm" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-" Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:26.903 [INFO][5611] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vv6cm" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:26.977 [INFO][5630] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" HandleID="k8s-pod-network.27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:26.999 [INFO][5630] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" HandleID="k8s-pod-network.27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bb310), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-eeb23789ea", "pod":"coredns-7db6d8ff4d-vv6cm", "timestamp":"2025-01-30 14:14:26.977805616 +0000 UTC"}, Hostname:"ci-4081.3.0-a-eeb23789ea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:26.999 [INFO][5630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.016 [INFO][5630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.016 [INFO][5630] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-eeb23789ea' Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.019 [INFO][5630] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.026 [INFO][5630] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.039 [INFO][5630] ipam/ipam.go 489: Trying affinity for 192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.046 [INFO][5630] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.053 [INFO][5630] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.054 [INFO][5630] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.057 [INFO][5630] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56 Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.064 [INFO][5630] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.075 [INFO][5630] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.198/26] block=192.168.69.192/26 handle="k8s-pod-network.27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.075 [INFO][5630] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.198/26] handle="k8s-pod-network.27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" host="ci-4081.3.0-a-eeb23789ea" Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.075 [INFO][5630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
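The INFO lines above trace Calico's block-affinity assignment: look up the host's affinities, try the affine block 192.168.69.192/26, load it, claim the next free address (192.168.69.198 here, right after .197 went to calico-kube-controllers), and write the block back to persist the claim. A rough Go sketch of that walk, assuming a simple in-memory block rather than the real datastore-backed one in ipam/ipam.go:

```go
// Sketch of the "Trying affinity -> load block -> assign -> write block"
// sequence logged above. Hypothetical types; real path is ipam.AutoAssign.
package main

import (
	"fmt"
	"net"
)

type block struct {
	cidr net.IPNet
	used map[string]bool // IPs already handed out in this block
}

// nextFree scans the affine block for the first unclaimed address and
// claims it ("Writing block in order to claim IPs").
func (b *block) nextFree() (net.IP, bool) {
	ip := b.cidr.IP.Mask(b.cidr.Mask)
	for ; b.cidr.Contains(ip); ip = inc(ip) {
		if !b.used[ip.String()] {
			b.used[ip.String()] = true
			return ip, true
		}
	}
	return nil, false
}

func inc(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.69.192/26") // the affine block in the log
	b := &block{cidr: *cidr, used: map[string]bool{
		// Assumed already claimed: the network address plus the endpoints
		// visible elsewhere in this log (.193 apiserver, .194 csi, ... .197
		// kube-controllers).
		"192.168.69.192": true, "192.168.69.193": true, "192.168.69.194": true,
		"192.168.69.195": true, "192.168.69.196": true, "192.168.69.197": true,
	}}
	if ip, ok := b.nextFree(); ok {
		fmt.Println("assigned", ip) // 192.168.69.198, matching coredns above
	}
}
```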
Jan 30 14:14:27.113564 containerd[1828]: 2025-01-30 14:14:27.075 [INFO][5630] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.198/26] IPv6=[] ContainerID="27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" HandleID="k8s-pod-network.27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:14:27.114123 containerd[1828]: 2025-01-30 14:14:27.081 [INFO][5611] cni-plugin/k8s.go 386: Populated endpoint ContainerID="27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vv6cm" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"01dd86c1-a36d-4981-a399-a0bafb12e0de", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"", Pod:"coredns-7db6d8ff4d-vv6cm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali234322d9f8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:27.114123 containerd[1828]: 2025-01-30 14:14:27.082 [INFO][5611] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.198/32] ContainerID="27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vv6cm" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:14:27.114123 containerd[1828]: 2025-01-30 14:14:27.082 [INFO][5611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali234322d9f8e ContainerID="27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vv6cm" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:14:27.114123 containerd[1828]: 2025-01-30 14:14:27.088 [INFO][5611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vv6cm" 
WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:14:27.114123 containerd[1828]: 2025-01-30 14:14:27.088 [INFO][5611] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vv6cm" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"01dd86c1-a36d-4981-a399-a0bafb12e0de", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56", Pod:"coredns-7db6d8ff4d-vv6cm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali234322d9f8e", MAC:"86:00:bc:df:01:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:27.114123 containerd[1828]: 2025-01-30 14:14:27.104 [INFO][5611] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vv6cm" WorkloadEndpoint="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:14:27.143265 containerd[1828]: time="2025-01-30T14:14:27.142467458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:14:27.143265 containerd[1828]: time="2025-01-30T14:14:27.142588057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:14:27.143265 containerd[1828]: time="2025-01-30T14:14:27.142635737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:27.143265 containerd[1828]: time="2025-01-30T14:14:27.143031616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:27.223569 containerd[1828]: time="2025-01-30T14:14:27.223524361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vv6cm,Uid:01dd86c1-a36d-4981-a399-a0bafb12e0de,Namespace:kube-system,Attempt:1,} returns sandbox id \"27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56\"" Jan 30 14:14:27.232853 containerd[1828]: time="2025-01-30T14:14:27.232675981Z" level=info msg="CreateContainer within sandbox \"27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:14:27.260531 containerd[1828]: time="2025-01-30T14:14:27.259918002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546b67bdf9-xj859,Uid:890e2792-22b0-41bc-a56a-9ffff22368a2,Namespace:calico-system,Attempt:1,} returns sandbox id \"d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41\"" Jan 30 14:14:27.294069 containerd[1828]: time="2025-01-30T14:14:27.294017208Z" level=info msg="CreateContainer within sandbox \"27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e8dfee9a373d5b5bda493e9a49fdb21dbf380e62205f23ec4c7eb254f728944\"" Jan 30 14:14:27.295735 containerd[1828]: time="2025-01-30T14:14:27.294847006Z" level=info msg="StartContainer for \"8e8dfee9a373d5b5bda493e9a49fdb21dbf380e62205f23ec4c7eb254f728944\"" Jan 30 14:14:27.360800 containerd[1828]: time="2025-01-30T14:14:27.359970304Z" level=info msg="StartContainer for \"8e8dfee9a373d5b5bda493e9a49fdb21dbf380e62205f23ec4c7eb254f728944\" returns successfully" Jan 30 14:14:27.921268 kubelet[3543]: I0130 14:14:27.920614 3543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:14:27.947855 kubelet[3543]: I0130 14:14:27.946393 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vv6cm" podStartSLOduration=64.946372747 podStartE2EDuration="1m4.946372747s" podCreationTimestamp="2025-01-30 14:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:14:27.930395782 +0000 UTC m=+80.492764959" watchObservedRunningTime="2025-01-30 14:14:27.946372747 +0000 UTC m=+80.508741884" Jan 30 14:14:28.179955 systemd-networkd[1394]: cali234322d9f8e: Gained IPv6LL Jan 30 14:14:29.011943 systemd-networkd[1394]: cali18ccae6808c: Gained IPv6LL Jan 30 14:14:29.429652 containerd[1828]: time="2025-01-30T14:14:29.429515197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:29.433945 containerd[1828]: time="2025-01-30T14:14:29.433693068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 30 14:14:29.437556 containerd[1828]: time="2025-01-30T14:14:29.437469540Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:29.443996 containerd[1828]: time="2025-01-30T14:14:29.443932966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 30 14:14:29.445495 containerd[1828]: time="2025-01-30T14:14:29.444912364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 5.73718127s" Jan 30 14:14:29.445495 containerd[1828]: time="2025-01-30T14:14:29.444953844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 30 14:14:29.448022 containerd[1828]: time="2025-01-30T14:14:29.447913197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 14:14:29.450378 containerd[1828]: time="2025-01-30T14:14:29.450244112Z" level=info msg="CreateContainer within sandbox \"696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 14:14:29.489541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113092303.mount: Deactivated successfully. Jan 30 14:14:29.505187 containerd[1828]: time="2025-01-30T14:14:29.505072513Z" level=info msg="CreateContainer within sandbox \"696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8d0c9f863257145021f8a4574f620a4e67f22b4c92cadc72adae5bfa62afa744\"" Jan 30 14:14:29.506930 containerd[1828]: time="2025-01-30T14:14:29.506192830Z" level=info msg="StartContainer for \"8d0c9f863257145021f8a4574f620a4e67f22b4c92cadc72adae5bfa62afa744\"" Jan 30 14:14:29.588659 containerd[1828]: time="2025-01-30T14:14:29.588615291Z" level=info msg="StartContainer for \"8d0c9f863257145021f8a4574f620a4e67f22b4c92cadc72adae5bfa62afa744\" returns successfully" Jan 30 14:14:29.700192 kubelet[3543]: I0130 14:14:29.700036 3543 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 14:14:29.702566 kubelet[3543]: I0130 14:14:29.702542 3543 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 14:14:30.369789 kubelet[3543]: I0130 14:14:30.369636 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mgkms" podStartSLOduration=50.955737637 podStartE2EDuration="1m0.369616359s" podCreationTimestamp="2025-01-30 14:13:30 +0000 UTC" firstStartedPulling="2025-01-30 14:14:20.032087719 +0000 UTC m=+72.594456896" lastFinishedPulling="2025-01-30 14:14:29.445966441 +0000 UTC m=+82.008335618" observedRunningTime="2025-01-30 14:14:29.941752762 +0000 UTC m=+82.504121939" watchObservedRunningTime="2025-01-30 14:14:30.369616359 +0000 UTC m=+82.931985616" Jan 30 14:14:33.012503 containerd[1828]: time="2025-01-30T14:14:33.012194864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:33.017005 containerd[1828]: time="2025-01-30T14:14:33.016798220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes 
read=31953828" Jan 30 14:14:33.063474 containerd[1828]: time="2025-01-30T14:14:33.063095183Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:33.108904 containerd[1828]: time="2025-01-30T14:14:33.108845587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:33.109698 containerd[1828]: time="2025-01-30T14:14:33.109499627Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.66153679s" Jan 30 14:14:33.109698 containerd[1828]: time="2025-01-30T14:14:33.109540187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 30 14:14:33.131541 containerd[1828]: time="2025-01-30T14:14:33.131396169Z" level=info msg="CreateContainer within sandbox \"d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 14:14:33.469690 containerd[1828]: time="2025-01-30T14:14:33.469318781Z" level=info msg="CreateContainer within sandbox \"d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a9e71a1c52901febe317ad3284309b7c77328d4312abf39c3a4580fd6db5943f\"" Jan 30 14:14:33.470860 containerd[1828]: time="2025-01-30T14:14:33.470237741Z" level=info msg="StartContainer for \"a9e71a1c52901febe317ad3284309b7c77328d4312abf39c3a4580fd6db5943f\"" Jan 30 14:14:33.576794 containerd[1828]: time="2025-01-30T14:14:33.576725576Z" level=info msg="StartContainer for \"a9e71a1c52901febe317ad3284309b7c77328d4312abf39c3a4580fd6db5943f\" returns successfully" Jan 30 14:14:33.958965 kubelet[3543]: I0130 14:14:33.958730 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-546b67bdf9-xj859" podStartSLOduration=58.113052917 podStartE2EDuration="1m3.958608993s" podCreationTimestamp="2025-01-30 14:13:30 +0000 UTC" firstStartedPulling="2025-01-30 14:14:27.26526399 +0000 UTC m=+79.827633167" lastFinishedPulling="2025-01-30 14:14:33.110820066 +0000 UTC m=+85.673189243" observedRunningTime="2025-01-30 14:14:33.957432114 +0000 UTC m=+86.519801251" watchObservedRunningTime="2025-01-30 14:14:33.958608993 +0000 UTC m=+86.520978170" Jan 30 14:14:44.709093 kubelet[3543]: I0130 14:14:44.708673 3543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:15:04.602055 systemd[1]: Started sshd@7-10.200.20.13:22-10.200.16.10:33500.service - OpenSSH per-connection server daemon (10.200.16.10:33500). 
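The kubelet pod_startup_latency_tracker entries above report podStartSLOduration as end-to-end startup time minus time spent pulling images: for calico-kube-controllers-546b67bdf9-xj859 that is 1m3.958608993s minus the 5.845556076s between firstStartedPulling and lastFinishedPulling, i.e. 58.113052917s. A small Go check of that arithmetic using the timestamps copied from the log (the helper is a sketch, not kubelet code):

```go
// Back-of-the-envelope check of the pod_startup_latency_tracker figures
// above: SLO duration = end-to-end startup minus image-pull time.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches the timestamp format printed in the log lines.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-30 14:13:30 +0000 UTC")               // podCreationTimestamp
	pullStart := mustParse("2025-01-30 14:14:27.26526399 +0000 UTC")    // firstStartedPulling
	pullEnd := mustParse("2025-01-30 14:14:33.110820066 +0000 UTC")     // lastFinishedPulling
	observed := mustParse("2025-01-30 14:14:33.958608993 +0000 UTC")    // observedRunningTime

	e2e := observed.Sub(created)        // 1m3.958608993s = podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 58.113052917s  = podStartSLOduration
	fmt.Println(e2e, slo)
}
```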
Jan 30 14:15:05.034792 sshd[5992]: Accepted publickey for core from 10.200.16.10 port 33500 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:15:05.036824 sshd[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:15:05.041197 systemd-logind[1798]: New session 10 of user core. Jan 30 14:15:05.046046 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:15:05.506366 sshd[5992]: pam_unix(sshd:session): session closed for user core Jan 30 14:15:05.510806 systemd[1]: sshd@7-10.200.20.13:22-10.200.16.10:33500.service: Deactivated successfully. Jan 30 14:15:05.515312 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:15:05.516389 systemd-logind[1798]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:15:05.517714 systemd-logind[1798]: Removed session 10. Jan 30 14:15:07.554465 containerd[1828]: time="2025-01-30T14:15:07.554361710Z" level=info msg="StopPodSandbox for \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\"" Jan 30 14:15:07.645326 containerd[1828]: 2025-01-30 14:15:07.595 [WARNING][6023] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41b34c4a-b7b3-49a0-aec8-339d6c10a9dc", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0", Pod:"csi-node-driver-mgkms", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8d8375593a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:15:07.645326 containerd[1828]: 2025-01-30 14:15:07.595 [INFO][6023] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:15:07.645326 containerd[1828]: 2025-01-30 14:15:07.596 [INFO][6023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" iface="eth0" netns="" Jan 30 14:15:07.645326 containerd[1828]: 2025-01-30 14:15:07.596 [INFO][6023] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:15:07.645326 containerd[1828]: 2025-01-30 14:15:07.596 [INFO][6023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:15:07.645326 containerd[1828]: 2025-01-30 14:15:07.618 [INFO][6029] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" HandleID="k8s-pod-network.1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Workload="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:15:07.645326 containerd[1828]: 2025-01-30 14:15:07.618 [INFO][6029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:15:07.645326 containerd[1828]: 2025-01-30 14:15:07.619 [INFO][6029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:15:07.645326 containerd[1828]: 2025-01-30 14:15:07.631 [WARNING][6029] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" HandleID="k8s-pod-network.1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Workload="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:15:07.645326 containerd[1828]: 2025-01-30 14:15:07.631 [INFO][6029] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" HandleID="k8s-pod-network.1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Workload="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:15:07.645326 containerd[1828]: 2025-01-30 14:15:07.636 [INFO][6029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:15:07.645326 containerd[1828]: 2025-01-30 14:15:07.643 [INFO][6023] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:15:07.645326 containerd[1828]: time="2025-01-30T14:15:07.645195979Z" level=info msg="TearDown network for sandbox \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\" successfully" Jan 30 14:15:07.645326 containerd[1828]: time="2025-01-30T14:15:07.645221859Z" level=info msg="StopPodSandbox for \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\" returns successfully" Jan 30 14:15:07.646717 containerd[1828]: time="2025-01-30T14:15:07.646235697Z" level=info msg="RemovePodSandbox for \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\"" Jan 30 14:15:07.646717 containerd[1828]: time="2025-01-30T14:15:07.646270857Z" level=info msg="Forcibly stopping sandbox \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\"" Jan 30 14:15:07.753439 containerd[1828]: 2025-01-30 14:15:07.697 [WARNING][6048] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41b34c4a-b7b3-49a0-aec8-339d6c10a9dc", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"696d4fbdc2c59dce9533684ec4eb2dc63d67d02ffe924e989031cafc2a267fe0", Pod:"csi-node-driver-mgkms", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8d8375593a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:15:07.753439 containerd[1828]: 2025-01-30 14:15:07.698 [INFO][6048] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:15:07.753439 containerd[1828]: 2025-01-30 14:15:07.698 [INFO][6048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" iface="eth0" netns="" Jan 30 14:15:07.753439 containerd[1828]: 2025-01-30 14:15:07.698 [INFO][6048] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:15:07.753439 containerd[1828]: 2025-01-30 14:15:07.698 [INFO][6048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:15:07.753439 containerd[1828]: 2025-01-30 14:15:07.737 [INFO][6054] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" HandleID="k8s-pod-network.1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Workload="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:15:07.753439 containerd[1828]: 2025-01-30 14:15:07.740 [INFO][6054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:15:07.753439 containerd[1828]: 2025-01-30 14:15:07.741 [INFO][6054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:15:07.753439 containerd[1828]: 2025-01-30 14:15:07.749 [WARNING][6054] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" HandleID="k8s-pod-network.1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Workload="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:15:07.753439 containerd[1828]: 2025-01-30 14:15:07.749 [INFO][6054] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" HandleID="k8s-pod-network.1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Workload="ci--4081.3.0--a--eeb23789ea-k8s-csi--node--driver--mgkms-eth0" Jan 30 14:15:07.753439 containerd[1828]: 2025-01-30 14:15:07.750 [INFO][6054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:15:07.753439 containerd[1828]: 2025-01-30 14:15:07.751 [INFO][6048] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4" Jan 30 14:15:07.753439 containerd[1828]: time="2025-01-30T14:15:07.753372015Z" level=info msg="TearDown network for sandbox \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\" successfully" Jan 30 14:15:07.766169 containerd[1828]: time="2025-01-30T14:15:07.765573952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:15:07.766169 containerd[1828]: time="2025-01-30T14:15:07.765674552Z" level=info msg="RemovePodSandbox \"1a83bb310b8ae22b6b3d12ae88820e243bfedd3c33db92029ff70c37c36710d4\" returns successfully" Jan 30 14:15:07.766424 containerd[1828]: time="2025-01-30T14:15:07.766386631Z" level=info msg="StopPodSandbox for \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\"" Jan 30 14:15:07.856827 containerd[1828]: 2025-01-30 14:15:07.806 [WARNING][6072] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0", GenerateName:"calico-apiserver-69bb947986-", Namespace:"calico-apiserver", SelfLink:"", UID:"d905625a-9071-4d1a-9572-328141962bc1", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bb947986", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1", Pod:"calico-apiserver-69bb947986-4c92w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali28268635b69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:15:07.856827 containerd[1828]: 2025-01-30 14:15:07.806 [INFO][6072] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:15:07.856827 containerd[1828]: 2025-01-30 14:15:07.806 [INFO][6072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" iface="eth0" netns="" Jan 30 14:15:07.856827 containerd[1828]: 2025-01-30 14:15:07.806 [INFO][6072] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:15:07.856827 containerd[1828]: 2025-01-30 14:15:07.806 [INFO][6072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:15:07.856827 containerd[1828]: 2025-01-30 14:15:07.837 [INFO][6078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" HandleID="k8s-pod-network.ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:15:07.856827 containerd[1828]: 2025-01-30 14:15:07.837 [INFO][6078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:15:07.856827 containerd[1828]: 2025-01-30 14:15:07.837 [INFO][6078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:15:07.856827 containerd[1828]: 2025-01-30 14:15:07.852 [WARNING][6078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" HandleID="k8s-pod-network.ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:15:07.856827 containerd[1828]: 2025-01-30 14:15:07.852 [INFO][6078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" HandleID="k8s-pod-network.ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:15:07.856827 containerd[1828]: 2025-01-30 14:15:07.853 [INFO][6078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:15:07.856827 containerd[1828]: 2025-01-30 14:15:07.855 [INFO][6072] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:15:07.858243 containerd[1828]: time="2025-01-30T14:15:07.856833541Z" level=info msg="TearDown network for sandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\" successfully" Jan 30 14:15:07.858243 containerd[1828]: time="2025-01-30T14:15:07.856860221Z" level=info msg="StopPodSandbox for \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\" returns successfully" Jan 30 14:15:07.858243 containerd[1828]: time="2025-01-30T14:15:07.858186778Z" level=info msg="RemovePodSandbox for \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\"" Jan 30 14:15:07.858243 containerd[1828]: time="2025-01-30T14:15:07.858224698Z" level=info msg="Forcibly stopping sandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\"" Jan 30 14:15:07.928430 containerd[1828]: 2025-01-30 14:15:07.894 [WARNING][6096] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0", GenerateName:"calico-apiserver-69bb947986-", Namespace:"calico-apiserver", SelfLink:"", UID:"d905625a-9071-4d1a-9572-328141962bc1", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bb947986", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"f33e45608c6f928b938e6c239a51fac79fcdeb507b4c1f3f38e5fd06503f7be1", Pod:"calico-apiserver-69bb947986-4c92w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali28268635b69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:15:07.928430 containerd[1828]: 2025-01-30 14:15:07.895 [INFO][6096] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:15:07.928430 containerd[1828]: 2025-01-30 14:15:07.895 [INFO][6096] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" iface="eth0" netns="" Jan 30 14:15:07.928430 containerd[1828]: 2025-01-30 14:15:07.895 [INFO][6096] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:15:07.928430 containerd[1828]: 2025-01-30 14:15:07.895 [INFO][6096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:15:07.928430 containerd[1828]: 2025-01-30 14:15:07.913 [INFO][6102] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" HandleID="k8s-pod-network.ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:15:07.928430 containerd[1828]: 2025-01-30 14:15:07.913 [INFO][6102] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:15:07.928430 containerd[1828]: 2025-01-30 14:15:07.913 [INFO][6102] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:15:07.928430 containerd[1828]: 2025-01-30 14:15:07.923 [WARNING][6102] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" HandleID="k8s-pod-network.ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:15:07.928430 containerd[1828]: 2025-01-30 14:15:07.923 [INFO][6102] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" HandleID="k8s-pod-network.ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--4c92w-eth0" Jan 30 14:15:07.928430 containerd[1828]: 2025-01-30 14:15:07.924 [INFO][6102] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:15:07.928430 containerd[1828]: 2025-01-30 14:15:07.926 [INFO][6096] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7" Jan 30 14:15:07.929479 containerd[1828]: time="2025-01-30T14:15:07.928853285Z" level=info msg="TearDown network for sandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\" successfully" Jan 30 14:15:07.937718 containerd[1828]: time="2025-01-30T14:15:07.937611388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:15:07.938041 containerd[1828]: time="2025-01-30T14:15:07.937926748Z" level=info msg="RemovePodSandbox \"ebeb366bdde6163ed9174bdebffc0c179153339c4f40f1593e174cc5839595c7\" returns successfully" Jan 30 14:15:07.938717 containerd[1828]: time="2025-01-30T14:15:07.938613107Z" level=info msg="StopPodSandbox for \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\"" Jan 30 14:15:08.018605 containerd[1828]: 2025-01-30 14:15:07.972 [WARNING][6120] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0", GenerateName:"calico-apiserver-69bb947986-", Namespace:"calico-apiserver", SelfLink:"", UID:"05962a30-7d91-453e-aaf3-1d47860b0219", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bb947986", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225", Pod:"calico-apiserver-69bb947986-b6l82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali633ba3c96e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:15:08.018605 containerd[1828]: 2025-01-30 14:15:07.972 [INFO][6120] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:15:08.018605 containerd[1828]: 2025-01-30 14:15:07.972 [INFO][6120] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" iface="eth0" netns="" Jan 30 14:15:08.018605 containerd[1828]: 2025-01-30 14:15:07.972 [INFO][6120] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:15:08.018605 containerd[1828]: 2025-01-30 14:15:07.972 [INFO][6120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:15:08.018605 containerd[1828]: 2025-01-30 14:15:08.003 [INFO][6127] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" HandleID="k8s-pod-network.1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:15:08.018605 containerd[1828]: 2025-01-30 14:15:08.003 [INFO][6127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:15:08.018605 containerd[1828]: 2025-01-30 14:15:08.003 [INFO][6127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:15:08.018605 containerd[1828]: 2025-01-30 14:15:08.013 [WARNING][6127] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" HandleID="k8s-pod-network.1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:15:08.018605 containerd[1828]: 2025-01-30 14:15:08.013 [INFO][6127] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" HandleID="k8s-pod-network.1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:15:08.018605 containerd[1828]: 2025-01-30 14:15:08.015 [INFO][6127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:15:08.018605 containerd[1828]: 2025-01-30 14:15:08.017 [INFO][6120] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:15:08.018605 containerd[1828]: time="2025-01-30T14:15:08.018475636Z" level=info msg="TearDown network for sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\" successfully" Jan 30 14:15:08.018605 containerd[1828]: time="2025-01-30T14:15:08.018502916Z" level=info msg="StopPodSandbox for \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\" returns successfully" Jan 30 14:15:08.020016 containerd[1828]: time="2025-01-30T14:15:08.019499194Z" level=info msg="RemovePodSandbox for \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\"" Jan 30 14:15:08.020016 containerd[1828]: time="2025-01-30T14:15:08.019536154Z" level=info msg="Forcibly stopping sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\"" Jan 30 14:15:08.101987 containerd[1828]: 2025-01-30 14:15:08.064 [WARNING][6145] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0", GenerateName:"calico-apiserver-69bb947986-", Namespace:"calico-apiserver", SelfLink:"", UID:"05962a30-7d91-453e-aaf3-1d47860b0219", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bb947986", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"0ad5c2a5642a21e85c5887bb9c7cd98cdf6366fb0bd7db060af1814b69e2e225", Pod:"calico-apiserver-69bb947986-b6l82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali633ba3c96e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:15:08.101987 containerd[1828]: 2025-01-30 14:15:08.064 [INFO][6145] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:15:08.101987 containerd[1828]: 2025-01-30 14:15:08.064 [INFO][6145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" iface="eth0" netns="" Jan 30 14:15:08.101987 containerd[1828]: 2025-01-30 14:15:08.064 [INFO][6145] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:15:08.101987 containerd[1828]: 2025-01-30 14:15:08.064 [INFO][6145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:15:08.101987 containerd[1828]: 2025-01-30 14:15:08.086 [INFO][6152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" HandleID="k8s-pod-network.1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:15:08.101987 containerd[1828]: 2025-01-30 14:15:08.086 [INFO][6152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:15:08.101987 containerd[1828]: 2025-01-30 14:15:08.086 [INFO][6152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:15:08.101987 containerd[1828]: 2025-01-30 14:15:08.097 [WARNING][6152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" HandleID="k8s-pod-network.1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:15:08.101987 containerd[1828]: 2025-01-30 14:15:08.097 [INFO][6152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" HandleID="k8s-pod-network.1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--apiserver--69bb947986--b6l82-eth0" Jan 30 14:15:08.101987 containerd[1828]: 2025-01-30 14:15:08.099 [INFO][6152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:15:08.101987 containerd[1828]: 2025-01-30 14:15:08.100 [INFO][6145] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7" Jan 30 14:15:08.102399 containerd[1828]: time="2025-01-30T14:15:08.102045119Z" level=info msg="TearDown network for sandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\" successfully" Jan 30 14:15:08.111256 containerd[1828]: time="2025-01-30T14:15:08.111111622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:15:08.111256 containerd[1828]: time="2025-01-30T14:15:08.111188341Z" level=info msg="RemovePodSandbox \"1e2d7adfb2b18e37f872c28f186e37313145c0c341ac597edcb11db820ba12c7\" returns successfully" Jan 30 14:15:08.112525 containerd[1828]: time="2025-01-30T14:15:08.112421179Z" level=info msg="StopPodSandbox for \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\"" Jan 30 14:15:08.195890 containerd[1828]: 2025-01-30 14:15:08.156 [WARNING][6170] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0", GenerateName:"calico-kube-controllers-546b67bdf9-", Namespace:"calico-system", SelfLink:"", UID:"890e2792-22b0-41bc-a56a-9ffff22368a2", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546b67bdf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41", Pod:"calico-kube-controllers-546b67bdf9-xj859", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18ccae6808c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:15:08.195890 containerd[1828]: 2025-01-30 14:15:08.156 [INFO][6170] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:15:08.195890 containerd[1828]: 2025-01-30 14:15:08.156 [INFO][6170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" iface="eth0" netns="" Jan 30 14:15:08.195890 containerd[1828]: 2025-01-30 14:15:08.156 [INFO][6170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:15:08.195890 containerd[1828]: 2025-01-30 14:15:08.156 [INFO][6170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:15:08.195890 containerd[1828]: 2025-01-30 14:15:08.178 [INFO][6176] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" HandleID="k8s-pod-network.b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:15:08.195890 containerd[1828]: 2025-01-30 14:15:08.179 [INFO][6176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:15:08.195890 containerd[1828]: 2025-01-30 14:15:08.179 [INFO][6176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:15:08.195890 containerd[1828]: 2025-01-30 14:15:08.190 [WARNING][6176] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" HandleID="k8s-pod-network.b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:15:08.195890 containerd[1828]: 2025-01-30 14:15:08.191 [INFO][6176] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" HandleID="k8s-pod-network.b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:15:08.195890 containerd[1828]: 2025-01-30 14:15:08.192 [INFO][6176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:15:08.195890 containerd[1828]: 2025-01-30 14:15:08.194 [INFO][6170] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:15:08.195890 containerd[1828]: time="2025-01-30T14:15:08.195765662Z" level=info msg="TearDown network for sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\" successfully" Jan 30 14:15:08.195890 containerd[1828]: time="2025-01-30T14:15:08.195791942Z" level=info msg="StopPodSandbox for \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\" returns successfully" Jan 30 14:15:08.196443 containerd[1828]: time="2025-01-30T14:15:08.196404861Z" level=info msg="RemovePodSandbox for \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\"" Jan 30 14:15:08.196482 containerd[1828]: time="2025-01-30T14:15:08.196453781Z" level=info msg="Forcibly stopping sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\"" Jan 30 14:15:08.270895 containerd[1828]: 2025-01-30 14:15:08.237 [WARNING][6195] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0", GenerateName:"calico-kube-controllers-546b67bdf9-", Namespace:"calico-system", SelfLink:"", UID:"890e2792-22b0-41bc-a56a-9ffff22368a2", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546b67bdf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"d29c8f19f867e0050a164a04301e8cb225fb259376d9152792092d4149330c41", Pod:"calico-kube-controllers-546b67bdf9-xj859", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18ccae6808c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:15:08.270895 containerd[1828]: 2025-01-30 14:15:08.237 [INFO][6195] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:15:08.270895 containerd[1828]: 2025-01-30 14:15:08.237 [INFO][6195] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" iface="eth0" netns="" Jan 30 14:15:08.270895 containerd[1828]: 2025-01-30 14:15:08.237 [INFO][6195] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:15:08.270895 containerd[1828]: 2025-01-30 14:15:08.237 [INFO][6195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:15:08.270895 containerd[1828]: 2025-01-30 14:15:08.256 [INFO][6201] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" HandleID="k8s-pod-network.b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:15:08.270895 containerd[1828]: 2025-01-30 14:15:08.256 [INFO][6201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:15:08.270895 containerd[1828]: 2025-01-30 14:15:08.256 [INFO][6201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:15:08.270895 containerd[1828]: 2025-01-30 14:15:08.265 [WARNING][6201] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" HandleID="k8s-pod-network.b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:15:08.270895 containerd[1828]: 2025-01-30 14:15:08.265 [INFO][6201] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" HandleID="k8s-pod-network.b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Workload="ci--4081.3.0--a--eeb23789ea-k8s-calico--kube--controllers--546b67bdf9--xj859-eth0" Jan 30 14:15:08.270895 containerd[1828]: 2025-01-30 14:15:08.268 [INFO][6201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:15:08.270895 containerd[1828]: 2025-01-30 14:15:08.269 [INFO][6195] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00" Jan 30 14:15:08.270895 containerd[1828]: time="2025-01-30T14:15:08.270867441Z" level=info msg="TearDown network for sandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\" successfully" Jan 30 14:15:08.281201 containerd[1828]: time="2025-01-30T14:15:08.280959502Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:15:08.281201 containerd[1828]: time="2025-01-30T14:15:08.281045941Z" level=info msg="RemovePodSandbox \"b9d3f760014df30b9adddb3efb56bd79a322d53b0b75a3628bce4e375b14be00\" returns successfully" Jan 30 14:15:08.283571 containerd[1828]: time="2025-01-30T14:15:08.281925260Z" level=info msg="StopPodSandbox for \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\"" Jan 30 14:15:08.354882 containerd[1828]: 2025-01-30 14:15:08.320 [WARNING][6219] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"01dd86c1-a36d-4981-a399-a0bafb12e0de", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56", Pod:"coredns-7db6d8ff4d-vv6cm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali234322d9f8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:15:08.354882 containerd[1828]: 2025-01-30 14:15:08.320 [INFO][6219] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:15:08.354882 containerd[1828]: 2025-01-30 14:15:08.320 [INFO][6219] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" iface="eth0" netns="" Jan 30 14:15:08.354882 containerd[1828]: 2025-01-30 14:15:08.320 [INFO][6219] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:15:08.354882 containerd[1828]: 2025-01-30 14:15:08.320 [INFO][6219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:15:08.354882 containerd[1828]: 2025-01-30 14:15:08.342 [INFO][6225] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" HandleID="k8s-pod-network.94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:15:08.354882 containerd[1828]: 2025-01-30 14:15:08.342 [INFO][6225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:15:08.354882 containerd[1828]: 2025-01-30 14:15:08.342 [INFO][6225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:15:08.354882 containerd[1828]: 2025-01-30 14:15:08.350 [WARNING][6225] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" HandleID="k8s-pod-network.94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:15:08.354882 containerd[1828]: 2025-01-30 14:15:08.351 [INFO][6225] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" HandleID="k8s-pod-network.94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:15:08.354882 containerd[1828]: 2025-01-30 14:15:08.352 [INFO][6225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:15:08.354882 containerd[1828]: 2025-01-30 14:15:08.353 [INFO][6219] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:15:08.355826 containerd[1828]: time="2025-01-30T14:15:08.354928522Z" level=info msg="TearDown network for sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\" successfully" Jan 30 14:15:08.355826 containerd[1828]: time="2025-01-30T14:15:08.355026402Z" level=info msg="StopPodSandbox for \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\" returns successfully" Jan 30 14:15:08.356093 containerd[1828]: time="2025-01-30T14:15:08.355935080Z" level=info msg="RemovePodSandbox for \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\"" Jan 30 14:15:08.356093 containerd[1828]: time="2025-01-30T14:15:08.355968320Z" level=info msg="Forcibly stopping sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\"" Jan 30 14:15:08.437158 containerd[1828]: 2025-01-30 14:15:08.397 [WARNING][6243] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"01dd86c1-a36d-4981-a399-a0bafb12e0de", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"27cfd13bb06d12537f0be04cc7da6087af2933800d9d0a963f25b0a21d3a3f56", Pod:"coredns-7db6d8ff4d-vv6cm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali234322d9f8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:15:08.437158 containerd[1828]: 2025-01-30 14:15:08.398 [INFO][6243] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:15:08.437158 containerd[1828]: 2025-01-30 14:15:08.398 [INFO][6243] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" iface="eth0" netns="" Jan 30 14:15:08.437158 containerd[1828]: 2025-01-30 14:15:08.398 [INFO][6243] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:15:08.437158 containerd[1828]: 2025-01-30 14:15:08.398 [INFO][6243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:15:08.437158 containerd[1828]: 2025-01-30 14:15:08.418 [INFO][6249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" HandleID="k8s-pod-network.94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:15:08.437158 containerd[1828]: 2025-01-30 14:15:08.418 [INFO][6249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:15:08.437158 containerd[1828]: 2025-01-30 14:15:08.418 [INFO][6249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:15:08.437158 containerd[1828]: 2025-01-30 14:15:08.428 [WARNING][6249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" HandleID="k8s-pod-network.94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:15:08.437158 containerd[1828]: 2025-01-30 14:15:08.428 [INFO][6249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" HandleID="k8s-pod-network.94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--vv6cm-eth0" Jan 30 14:15:08.437158 containerd[1828]: 2025-01-30 14:15:08.431 [INFO][6249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:15:08.437158 containerd[1828]: 2025-01-30 14:15:08.434 [INFO][6243] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25" Jan 30 14:15:08.437158 containerd[1828]: time="2025-01-30T14:15:08.436978048Z" level=info msg="TearDown network for sandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\" successfully" Jan 30 14:15:08.455419 containerd[1828]: time="2025-01-30T14:15:08.455357933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:15:08.455584 containerd[1828]: time="2025-01-30T14:15:08.455435813Z" level=info msg="RemovePodSandbox \"94f83001e41983b3a92d0fd698dc6175399a0adf3424455e4e998c394b13aa25\" returns successfully" Jan 30 14:15:08.456098 containerd[1828]: time="2025-01-30T14:15:08.456062172Z" level=info msg="StopPodSandbox for \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\"" Jan 30 14:15:08.554379 containerd[1828]: 2025-01-30 14:15:08.491 [WARNING][6267] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"76abe81d-4667-42d1-9922-dae522fdac2f", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8", Pod:"coredns-7db6d8ff4d-fwz54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib72db9c2ebb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:15:08.554379 containerd[1828]: 2025-01-30 14:15:08.491 [INFO][6267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:15:08.554379 containerd[1828]: 2025-01-30 14:15:08.491 [INFO][6267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" iface="eth0" netns="" Jan 30 14:15:08.554379 containerd[1828]: 2025-01-30 14:15:08.491 [INFO][6267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:15:08.554379 containerd[1828]: 2025-01-30 14:15:08.491 [INFO][6267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:15:08.554379 containerd[1828]: 2025-01-30 14:15:08.517 [INFO][6273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" HandleID="k8s-pod-network.59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:15:08.554379 containerd[1828]: 2025-01-30 14:15:08.517 [INFO][6273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:15:08.554379 containerd[1828]: 2025-01-30 14:15:08.517 [INFO][6273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:15:08.554379 containerd[1828]: 2025-01-30 14:15:08.544 [WARNING][6273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" HandleID="k8s-pod-network.59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:15:08.554379 containerd[1828]: 2025-01-30 14:15:08.544 [INFO][6273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" HandleID="k8s-pod-network.59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:15:08.554379 containerd[1828]: 2025-01-30 14:15:08.546 [INFO][6273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:15:08.554379 containerd[1828]: 2025-01-30 14:15:08.548 [INFO][6267] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:15:08.554379 containerd[1828]: time="2025-01-30T14:15:08.553415268Z" level=info msg="TearDown network for sandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\" successfully" Jan 30 14:15:08.554379 containerd[1828]: time="2025-01-30T14:15:08.553439628Z" level=info msg="StopPodSandbox for \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\" returns successfully" Jan 30 14:15:08.557269 containerd[1828]: time="2025-01-30T14:15:08.556892822Z" level=info msg="RemovePodSandbox for \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\"" Jan 30 14:15:08.557269 containerd[1828]: time="2025-01-30T14:15:08.556931622Z" level=info msg="Forcibly stopping sandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\"" Jan 30 14:15:08.640910 containerd[1828]: 2025-01-30 14:15:08.600 [WARNING][6291] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"76abe81d-4667-42d1-9922-dae522fdac2f", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eeb23789ea", ContainerID:"4a73e9d1e2f808d0488d33c72444a4683827236b3a0175aef1406192e42062c8", Pod:"coredns-7db6d8ff4d-fwz54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib72db9c2ebb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:15:08.640910 containerd[1828]: 2025-01-30 14:15:08.601 [INFO][6291] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:15:08.640910 containerd[1828]: 2025-01-30 14:15:08.601 [INFO][6291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" iface="eth0" netns="" Jan 30 14:15:08.640910 containerd[1828]: 2025-01-30 14:15:08.601 [INFO][6291] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:15:08.640910 containerd[1828]: 2025-01-30 14:15:08.601 [INFO][6291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:15:08.640910 containerd[1828]: 2025-01-30 14:15:08.621 [INFO][6297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" HandleID="k8s-pod-network.59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:15:08.640910 containerd[1828]: 2025-01-30 14:15:08.621 [INFO][6297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:15:08.640910 containerd[1828]: 2025-01-30 14:15:08.621 [INFO][6297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:15:08.640910 containerd[1828]: 2025-01-30 14:15:08.632 [WARNING][6297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" HandleID="k8s-pod-network.59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:15:08.640910 containerd[1828]: 2025-01-30 14:15:08.632 [INFO][6297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" HandleID="k8s-pod-network.59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Workload="ci--4081.3.0--a--eeb23789ea-k8s-coredns--7db6d8ff4d--fwz54-eth0" Jan 30 14:15:08.640910 containerd[1828]: 2025-01-30 14:15:08.635 [INFO][6297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:15:08.640910 containerd[1828]: 2025-01-30 14:15:08.639 [INFO][6291] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd" Jan 30 14:15:08.642316 containerd[1828]: time="2025-01-30T14:15:08.641954222Z" level=info msg="TearDown network for sandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\" successfully" Jan 30 14:15:08.653297 containerd[1828]: time="2025-01-30T14:15:08.653244960Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:15:08.653474 containerd[1828]: time="2025-01-30T14:15:08.653325440Z" level=info msg="RemovePodSandbox \"59aa90d8ca83b8b1ffc140c023eaf7aecc4e9eeb8c8a5d0b8f64b849d258b8cd\" returns successfully" Jan 30 14:15:10.582078 systemd[1]: Started sshd@8-10.200.20.13:22-10.200.16.10:39438.service - OpenSSH per-connection server daemon (10.200.16.10:39438). Jan 30 14:15:11.017610 sshd[6303]: Accepted publickey for core from 10.200.16.10 port 39438 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:15:11.019205 sshd[6303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:15:11.025820 systemd-logind[1798]: New session 11 of user core. Jan 30 14:15:11.030136 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:15:11.408919 sshd[6303]: pam_unix(sshd:session): session closed for user core Jan 30 14:15:11.413911 systemd[1]: sshd@8-10.200.20.13:22-10.200.16.10:39438.service: Deactivated successfully. Jan 30 14:15:11.416944 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:15:11.417104 systemd-logind[1798]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:15:11.419392 systemd-logind[1798]: Removed session 11. Jan 30 14:15:16.496141 systemd[1]: Started sshd@9-10.200.20.13:22-10.200.16.10:38406.service - OpenSSH per-connection server daemon (10.200.16.10:38406). Jan 30 14:15:16.944481 sshd[6317]: Accepted publickey for core from 10.200.16.10 port 38406 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:15:16.946143 sshd[6317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:15:16.951414 systemd-logind[1798]: New session 12 of user core. Jan 30 14:15:16.957164 systemd[1]: Started session-12.scope - Session 12 of User core. 
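The five teardown cycles above (14:15:07.856 through 14:15:08.653, covering both calico-apiserver pods, calico-kube-controllers, and both coredns pods) all trace the same shape: kubelet garbage-collects a dead sandbox, containerd issues a CNI DEL, and Calico sees that CNI_CONTAINERID names the old sandbox (e.g. ebeb366b...) while the WorkloadEndpoint already records a newer ContainerID (f33e4560...), so it warns and deliberately leaves the WEP in place. IP release then proceeds in two steps under the host-wide IPAM lock, first by handle ID and then by workload ID; the "Asked to release address but it doesn't exist. Ignoring" warning is the benign case where an earlier DEL already freed the address, and the later containerd warning about a missing podSandbox status just means the sandbox metadata was already gone when the removal event was emitted. Below is a minimal Go sketch of that release-with-fallback pattern; the types and method names are hypothetical stand-ins for illustration, not Calico's actual implementation.

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    var errNotFound = errors.New("address not found")

    // ipamStore stands in for Calico's IPAM datastore; the real plugin keeps
    // allocations in etcd or the Kubernetes API, this toy keeps them in maps.
    type ipamStore struct {
        mu         sync.Mutex // plays the role of the "host-wide IPAM lock"
        byHandle   map[string]string
        byWorkload map[string]string
    }

    func (s *ipamStore) releaseByHandle(handleID string) error {
        if _, ok := s.byHandle[handleID]; !ok {
            return errNotFound
        }
        delete(s.byHandle, handleID)
        return nil
    }

    func (s *ipamStore) releaseByWorkload(workloadID string) error {
        if _, ok := s.byWorkload[workloadID]; !ok {
            return errNotFound
        }
        delete(s.byWorkload, workloadID)
        return nil
    }

    // Release mirrors the sequence the ipam_plugin lines trace out: take the
    // lock, try the handle ID, and when the address "doesn't exist" fall back
    // to the workload ID so a repeated DEL still completes cleanly.
    func (s *ipamStore) Release(handleID, workloadID string) {
        s.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer s.mu.Unlock() // "Released host-wide IPAM lock."
        if err := s.releaseByHandle(handleID); errors.Is(err, errNotFound) {
            fmt.Println("asked to release address but it doesn't exist; ignoring")
            _ = s.releaseByWorkload(workloadID)
        }
    }

    func main() {
        s := &ipamStore{byHandle: map[string]string{}, byWorkload: map[string]string{}}
        // A second DEL for a sandbox whose address was already freed: the
        // handle lookup misses, the workload fallback misses too, and the
        // call still completes without an error.
        s.Release("k8s-pod-network.ebeb366b...", "calico-apiserver-69bb947986-4c92w")
    }

The point of the fallback plus the swallowed not-found error is idempotence: a CNI DEL can be retried any number of times and still succeed, which is exactly what lets every RemovePodSandbox above return successfully even for sandboxes that were already torn down.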
Jan 30 14:15:17.355933 sshd[6317]: pam_unix(sshd:session): session closed for user core Jan 30 14:15:17.360243 systemd[1]: sshd@9-10.200.20.13:22-10.200.16.10:38406.service: Deactivated successfully. Jan 30 14:15:17.364534 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:15:17.366701 systemd-logind[1798]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:15:17.368108 systemd-logind[1798]: Removed session 12. Jan 30 14:15:17.435119 systemd[1]: Started sshd@10-10.200.20.13:22-10.200.16.10:38410.service - OpenSSH per-connection server daemon (10.200.16.10:38410). Jan 30 14:15:17.887170 sshd[6332]: Accepted publickey for core from 10.200.16.10 port 38410 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:15:17.888613 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:15:17.893354 systemd-logind[1798]: New session 13 of user core. Jan 30 14:15:17.898133 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:15:18.329213 sshd[6332]: pam_unix(sshd:session): session closed for user core Jan 30 14:15:18.333627 systemd[1]: sshd@10-10.200.20.13:22-10.200.16.10:38410.service: Deactivated successfully. Jan 30 14:15:18.337077 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:15:18.338363 systemd-logind[1798]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:15:18.339872 systemd-logind[1798]: Removed session 13. Jan 30 14:15:18.405270 systemd[1]: Started sshd@11-10.200.20.13:22-10.200.16.10:38412.service - OpenSSH per-connection server daemon (10.200.16.10:38412). Jan 30 14:15:18.837322 sshd[6343]: Accepted publickey for core from 10.200.16.10 port 38412 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:15:18.838791 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:15:18.842794 systemd-logind[1798]: New session 14 of user core. Jan 30 14:15:18.847009 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:15:19.226338 sshd[6343]: pam_unix(sshd:session): session closed for user core Jan 30 14:15:19.230843 systemd[1]: sshd@11-10.200.20.13:22-10.200.16.10:38412.service: Deactivated successfully. Jan 30 14:15:19.234718 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:15:19.235541 systemd-logind[1798]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:15:19.237266 systemd-logind[1798]: Removed session 14. Jan 30 14:15:24.300112 systemd[1]: Started sshd@12-10.200.20.13:22-10.200.16.10:38420.service - OpenSSH per-connection server daemon (10.200.16.10:38420). Jan 30 14:15:24.759809 sshd[6359]: Accepted publickey for core from 10.200.16.10 port 38420 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:15:24.760556 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:15:24.768072 systemd-logind[1798]: New session 15 of user core. Jan 30 14:15:24.772510 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:15:25.155935 sshd[6359]: pam_unix(sshd:session): session closed for user core Jan 30 14:15:25.159598 systemd[1]: sshd@12-10.200.20.13:22-10.200.16.10:38420.service: Deactivated successfully. Jan 30 14:15:25.164184 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:15:25.165158 systemd-logind[1798]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:15:25.166497 systemd-logind[1798]: Removed session 15. 
Jan 30 14:15:30.236054 systemd[1]: Started sshd@13-10.200.20.13:22-10.200.16.10:45432.service - OpenSSH per-connection server daemon (10.200.16.10:45432). Jan 30 14:15:30.685599 sshd[6396]: Accepted publickey for core from 10.200.16.10 port 45432 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:15:30.687275 sshd[6396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:15:30.692026 systemd-logind[1798]: New session 16 of user core. Jan 30 14:15:30.697166 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:15:31.072104 sshd[6396]: pam_unix(sshd:session): session closed for user core Jan 30 14:15:31.075961 systemd-logind[1798]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:15:31.076345 systemd[1]: sshd@13-10.200.20.13:22-10.200.16.10:45432.service: Deactivated successfully. Jan 30 14:15:31.081062 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:15:31.082365 systemd-logind[1798]: Removed session 16. Jan 30 14:15:36.152073 systemd[1]: Started sshd@14-10.200.20.13:22-10.200.16.10:43020.service - OpenSSH per-connection server daemon (10.200.16.10:43020). Jan 30 14:15:36.583454 sshd[6432]: Accepted publickey for core from 10.200.16.10 port 43020 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:15:36.585053 sshd[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:15:36.589171 systemd-logind[1798]: New session 17 of user core. Jan 30 14:15:36.592236 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:15:36.976696 sshd[6432]: pam_unix(sshd:session): session closed for user core Jan 30 14:15:36.980589 systemd[1]: sshd@14-10.200.20.13:22-10.200.16.10:43020.service: Deactivated successfully. Jan 30 14:15:36.983562 systemd-logind[1798]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:15:36.984037 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:15:36.987299 systemd-logind[1798]: Removed session 17. Jan 30 14:15:42.054096 systemd[1]: Started sshd@15-10.200.20.13:22-10.200.16.10:43026.service - OpenSSH per-connection server daemon (10.200.16.10:43026). Jan 30 14:15:42.485370 sshd[6471]: Accepted publickey for core from 10.200.16.10 port 43026 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:15:42.485980 sshd[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:15:42.492395 systemd-logind[1798]: New session 18 of user core. Jan 30 14:15:42.496079 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 14:15:42.886235 sshd[6471]: pam_unix(sshd:session): session closed for user core Jan 30 14:15:42.889939 systemd[1]: sshd@15-10.200.20.13:22-10.200.16.10:43026.service: Deactivated successfully. Jan 30 14:15:42.894591 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:15:42.895981 systemd-logind[1798]: Session 18 logged out. Waiting for processes to exit. Jan 30 14:15:42.897323 systemd-logind[1798]: Removed session 18. Jan 30 14:15:42.975073 systemd[1]: Started sshd@16-10.200.20.13:22-10.200.16.10:43042.service - OpenSSH per-connection server daemon (10.200.16.10:43042). 
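Every SSH connection in this stretch runs as its own transient unit, e.g. sshd@8-10.200.20.13:22-10.200.16.10:39438.service; the unit description, "OpenSSH per-connection server daemon", marks this as the socket-activated style in which systemd accepts each TCP connection and spawns a dedicated sshd instance for it. The instance name appears to encode a connection sequence number plus the local and remote endpoints. Below is a small Go parser for that shape, inferred purely from the unit names in this log rather than from any documented systemd format.

    package main

    import (
        "fmt"
        "regexp"
    )

    // Unit instance names in the log look like:
    //   sshd@8-10.200.20.13:22-10.200.16.10:39438.service
    // i.e. <seq>-<local>:<port>-<remote>:<port>. The layout is an
    // assumption read off the log lines above.
    var unitRE = regexp.MustCompile(`^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

    func main() {
        m := unitRE.FindStringSubmatch("sshd@8-10.200.20.13:22-10.200.16.10:39438.service")
        if m != nil {
            fmt.Printf("conn #%s: local %s:%s, remote %s:%s\n", m[1], m[2], m[3], m[4], m[5])
        }
    }

Pulling the endpoints out of the unit name this way is handy when correlating a logind session with the client that opened it, since the remote address otherwise only appears in the sshd "Accepted publickey" record.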
Jan 30 14:15:43.405656 sshd[6488]: Accepted publickey for core from 10.200.16.10 port 43042 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:15:43.407106 sshd[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:15:43.412794 systemd-logind[1798]: New session 19 of user core.
Jan 30 14:15:43.421877 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 14:15:43.924187 sshd[6488]: pam_unix(sshd:session): session closed for user core
Jan 30 14:15:43.928162 systemd-logind[1798]: Session 19 logged out. Waiting for processes to exit.
Jan 30 14:15:43.929081 systemd[1]: sshd@16-10.200.20.13:22-10.200.16.10:43042.service: Deactivated successfully.
Jan 30 14:15:43.932188 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 14:15:43.934253 systemd-logind[1798]: Removed session 19.
Jan 30 14:15:44.002049 systemd[1]: Started sshd@17-10.200.20.13:22-10.200.16.10:43056.service - OpenSSH per-connection server daemon (10.200.16.10:43056).
Jan 30 14:15:44.432736 sshd[6500]: Accepted publickey for core from 10.200.16.10 port 43056 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:15:44.434223 sshd[6500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:15:44.439057 systemd-logind[1798]: New session 20 of user core.
Jan 30 14:15:44.443138 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 14:15:46.557146 sshd[6500]: pam_unix(sshd:session): session closed for user core
Jan 30 14:15:46.562132 systemd[1]: sshd@17-10.200.20.13:22-10.200.16.10:43056.service: Deactivated successfully.
Jan 30 14:15:46.565748 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 14:15:46.567828 systemd-logind[1798]: Session 20 logged out. Waiting for processes to exit.
Jan 30 14:15:46.569486 systemd-logind[1798]: Removed session 20.
Jan 30 14:15:46.634940 systemd[1]: Started sshd@18-10.200.20.13:22-10.200.16.10:60020.service - OpenSSH per-connection server daemon (10.200.16.10:60020).
Jan 30 14:15:47.070665 sshd[6536]: Accepted publickey for core from 10.200.16.10 port 60020 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:15:47.073621 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:15:47.078043 systemd-logind[1798]: New session 21 of user core.
Jan 30 14:15:47.083601 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 14:15:47.582330 sshd[6536]: pam_unix(sshd:session): session closed for user core
Jan 30 14:15:47.587196 systemd[1]: sshd@18-10.200.20.13:22-10.200.16.10:60020.service: Deactivated successfully.
Jan 30 14:15:47.590817 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 14:15:47.592032 systemd-logind[1798]: Session 21 logged out. Waiting for processes to exit.
Jan 30 14:15:47.593543 systemd-logind[1798]: Removed session 21.
Jan 30 14:15:47.660371 systemd[1]: Started sshd@19-10.200.20.13:22-10.200.16.10:60032.service - OpenSSH per-connection server daemon (10.200.16.10:60032).
Jan 30 14:15:48.094059 sshd[6550]: Accepted publickey for core from 10.200.16.10 port 60032 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:15:48.095445 sshd[6550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:15:48.099925 systemd-logind[1798]: New session 22 of user core.
Jan 30 14:15:48.109077 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 14:15:48.478721 sshd[6550]: pam_unix(sshd:session): session closed for user core
Jan 30 14:15:48.481892 systemd[1]: sshd@19-10.200.20.13:22-10.200.16.10:60032.service: Deactivated successfully.
Jan 30 14:15:48.486542 systemd-logind[1798]: Session 22 logged out. Waiting for processes to exit.
Jan 30 14:15:48.487237 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 14:15:48.489121 systemd-logind[1798]: Removed session 22.
Jan 30 14:15:53.556024 systemd[1]: Started sshd@20-10.200.20.13:22-10.200.16.10:60042.service - OpenSSH per-connection server daemon (10.200.16.10:60042).
Jan 30 14:15:53.991519 sshd[6566]: Accepted publickey for core from 10.200.16.10 port 60042 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:15:53.993005 sshd[6566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:15:53.997299 systemd-logind[1798]: New session 23 of user core.
Jan 30 14:15:54.003183 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 14:15:54.381951 sshd[6566]: pam_unix(sshd:session): session closed for user core
Jan 30 14:15:54.386664 systemd[1]: sshd@20-10.200.20.13:22-10.200.16.10:60042.service: Deactivated successfully.
Jan 30 14:15:54.389997 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 14:15:54.390428 systemd-logind[1798]: Session 23 logged out. Waiting for processes to exit.
Jan 30 14:15:54.393009 systemd-logind[1798]: Removed session 23.
Jan 30 14:15:59.463030 systemd[1]: Started sshd@21-10.200.20.13:22-10.200.16.10:58430.service - OpenSSH per-connection server daemon (10.200.16.10:58430).
Jan 30 14:15:59.910612 sshd[6601]: Accepted publickey for core from 10.200.16.10 port 58430 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:15:59.912257 sshd[6601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:15:59.918827 systemd-logind[1798]: New session 24 of user core.
Jan 30 14:15:59.925778 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 14:16:00.313413 sshd[6601]: pam_unix(sshd:session): session closed for user core
Jan 30 14:16:00.318695 systemd[1]: sshd@21-10.200.20.13:22-10.200.16.10:58430.service: Deactivated successfully.
Jan 30 14:16:00.323143 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 14:16:00.324319 systemd-logind[1798]: Session 24 logged out. Waiting for processes to exit.
Jan 30 14:16:00.325306 systemd-logind[1798]: Removed session 24.
Jan 30 14:16:05.384065 systemd[1]: Started sshd@22-10.200.20.13:22-10.200.16.10:58432.service - OpenSSH per-connection server daemon (10.200.16.10:58432).
Jan 30 14:16:05.794321 sshd[6635]: Accepted publickey for core from 10.200.16.10 port 58432 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:16:05.795806 sshd[6635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:16:05.799979 systemd-logind[1798]: New session 25 of user core.
Jan 30 14:16:05.807241 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 14:16:06.168047 sshd[6635]: pam_unix(sshd:session): session closed for user core
Jan 30 14:16:06.171936 systemd[1]: sshd@22-10.200.20.13:22-10.200.16.10:58432.service: Deactivated successfully.
Jan 30 14:16:06.176309 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 14:16:06.177159 systemd-logind[1798]: Session 25 logged out. Waiting for processes to exit.
Jan 30 14:16:06.179035 systemd-logind[1798]: Removed session 25.
Jan 30 14:16:11.250051 systemd[1]: Started sshd@23-10.200.20.13:22-10.200.16.10:52734.service - OpenSSH per-connection server daemon (10.200.16.10:52734).
Jan 30 14:16:11.682217 sshd[6656]: Accepted publickey for core from 10.200.16.10 port 52734 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:16:11.683626 sshd[6656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:16:11.689267 systemd-logind[1798]: New session 26 of user core.
Jan 30 14:16:11.692011 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 14:16:12.059143 sshd[6656]: pam_unix(sshd:session): session closed for user core
Jan 30 14:16:12.062523 systemd[1]: sshd@23-10.200.20.13:22-10.200.16.10:52734.service: Deactivated successfully.
Jan 30 14:16:12.062705 systemd-logind[1798]: Session 26 logged out. Waiting for processes to exit.
Jan 30 14:16:12.067059 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 14:16:12.069710 systemd-logind[1798]: Removed session 26.
Jan 30 14:16:17.133247 systemd[1]: Started sshd@24-10.200.20.13:22-10.200.16.10:60540.service - OpenSSH per-connection server daemon (10.200.16.10:60540).
Jan 30 14:16:17.567863 sshd[6670]: Accepted publickey for core from 10.200.16.10 port 60540 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:16:17.569283 sshd[6670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:16:17.574051 systemd-logind[1798]: New session 27 of user core.
Jan 30 14:16:17.584135 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 14:16:17.963490 sshd[6670]: pam_unix(sshd:session): session closed for user core
Jan 30 14:16:17.969194 systemd[1]: sshd@24-10.200.20.13:22-10.200.16.10:60540.service: Deactivated successfully.
Jan 30 14:16:17.969942 systemd-logind[1798]: Session 27 logged out. Waiting for processes to exit.
Jan 30 14:16:17.972195 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 14:16:17.975834 systemd-logind[1798]: Removed session 27.
Jan 30 14:16:23.039032 systemd[1]: Started sshd@25-10.200.20.13:22-10.200.16.10:60550.service - OpenSSH per-connection server daemon (10.200.16.10:60550).
Jan 30 14:16:23.452222 sshd[6685]: Accepted publickey for core from 10.200.16.10 port 60550 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:16:23.453707 sshd[6685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:16:23.458005 systemd-logind[1798]: New session 28 of user core.
Jan 30 14:16:23.465147 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 14:16:23.818619 sshd[6685]: pam_unix(sshd:session): session closed for user core
Jan 30 14:16:23.823189 systemd[1]: sshd@25-10.200.20.13:22-10.200.16.10:60550.service: Deactivated successfully.
Jan 30 14:16:23.827354 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 14:16:23.828608 systemd-logind[1798]: Session 28 logged out. Waiting for processes to exit.
Jan 30 14:16:23.829659 systemd-logind[1798]: Removed session 28.