Feb 13 20:43:48.396469 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:43:48.396494 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:43:48.396502 kernel: KASLR enabled
Feb 13 20:43:48.396508 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 13 20:43:48.396515 kernel: printk: bootconsole [pl11] enabled
Feb 13 20:43:48.396521 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:43:48.396528 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Feb 13 20:43:48.396534 kernel: random: crng init done
Feb 13 20:43:48.396540 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:43:48.396546 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Feb 13 20:43:48.396553 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:48.396559 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:48.396566 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Feb 13 20:43:48.396572 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:48.396580 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:48.396586 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:48.396593 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:48.396601 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:48.396608 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:48.396614 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 13 20:43:48.396621 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:48.396627 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 13 20:43:48.396633 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Feb 13 20:43:48.396640 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Feb 13 20:43:48.396646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Feb 13 20:43:48.396653 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Feb 13 20:43:48.396659 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Feb 13 20:43:48.396665 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Feb 13 20:43:48.396673 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Feb 13 20:43:48.396680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Feb 13 20:43:48.396686 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Feb 13 20:43:48.396693 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Feb 13 20:43:48.396699 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Feb 13 20:43:48.396706 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Feb 13 20:43:48.396712 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Feb 13 20:43:48.396718 kernel: Zone ranges:
Feb 13 20:43:48.396725 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 13 20:43:48.396731 kernel: DMA32 empty
Feb 13 20:43:48.396737 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 20:43:48.396744 kernel: Movable zone start for each node
Feb 13 20:43:48.396754 kernel: Early memory node ranges
Feb 13 20:43:48.396761 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 13 20:43:48.396768 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Feb 13 20:43:48.396774 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Feb 13 20:43:48.396781 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Feb 13 20:43:48.396789 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Feb 13 20:43:48.396796 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Feb 13 20:43:48.396803 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 20:43:48.396810 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 13 20:43:48.396816 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 13 20:43:48.396823 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:43:48.396830 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:43:48.396836 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:43:48.396843 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 13 20:43:48.396850 kernel: psci: SMC Calling Convention v1.4
Feb 13 20:43:48.396857 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 13 20:43:48.396863 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Feb 13 20:43:48.396872 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:43:48.396879 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:43:48.396886 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 20:43:48.396892 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:43:48.396899 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:43:48.396906 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:43:48.396913 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:43:48.396920 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:43:48.396926 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:43:48.396933 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:43:48.396940 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 13 20:43:48.396948 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:43:48.396955 kernel: alternatives: applying boot alternatives
Feb 13 20:43:48.396963 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:43:48.396971 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:43:48.396978 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:43:48.396985 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:43:48.396991 kernel: Fallback order for Node 0: 0
Feb 13 20:43:48.396998 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 13 20:43:48.397005 kernel: Policy zone: Normal
Feb 13 20:43:48.397011 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:43:48.397018 kernel: software IO TLB: area num 2.
Feb 13 20:43:48.397026 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Feb 13 20:43:48.397033 kernel: Memory: 3982752K/4194160K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 211408K reserved, 0K cma-reserved)
Feb 13 20:43:48.397040 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:43:48.397047 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:43:48.397055 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:43:48.397062 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:43:48.397068 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:43:48.397075 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:43:48.397082 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:43:48.397089 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:43:48.397096 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:43:48.397104 kernel: GICv3: 960 SPIs implemented
Feb 13 20:43:48.397111 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:43:48.397117 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:43:48.397124 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:43:48.397131 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 20:43:48.397138 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 20:43:48.397145 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:43:48.397152 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:43:48.397158 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:43:48.397165 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:43:48.397172 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:43:48.397181 kernel: Console: colour dummy device 80x25
Feb 13 20:43:48.397188 kernel: printk: console [tty1] enabled
Feb 13 20:43:48.397195 kernel: ACPI: Core revision 20230628
Feb 13 20:43:48.397202 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:43:48.397209 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:43:48.397216 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:43:48.397223 kernel: landlock: Up and running.
Feb 13 20:43:48.397230 kernel: SELinux: Initializing.
Feb 13 20:43:48.397237 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:43:48.397244 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:43:48.397252 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:43:48.397259 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:43:48.397267 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 20:43:48.397274 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 20:43:48.397281 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 20:43:48.397287 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:43:48.397295 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:43:48.397308 kernel: Remapping and enabling EFI services.
Feb 13 20:43:48.397316 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:43:48.397323 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:43:48.397330 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 20:43:48.397339 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:43:48.397347 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:43:48.397354 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:43:48.397376 kernel: SMP: Total of 2 processors activated.
Feb 13 20:43:48.397386 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:43:48.397396 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 20:43:48.397403 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:43:48.397411 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:43:48.397418 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:43:48.397426 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:43:48.397433 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:43:48.397440 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:43:48.397448 kernel: alternatives: applying system-wide alternatives
Feb 13 20:43:48.397455 kernel: devtmpfs: initialized
Feb 13 20:43:48.397464 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:43:48.397471 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:43:48.397479 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:43:48.397486 kernel: SMBIOS 3.1.0 present.
Feb 13 20:43:48.397494 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 20:43:48.397501 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:43:48.397509 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:43:48.397516 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:43:48.397524 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:43:48.397532 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:43:48.397540 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 20:43:48.397547 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:43:48.397554 kernel: cpuidle: using governor menu
Feb 13 20:43:48.397562 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:43:48.397569 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:43:48.397576 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:43:48.397584 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:43:48.397591 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:43:48.397600 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:43:48.397607 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:43:48.397615 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:43:48.397622 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:43:48.397629 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:43:48.397637 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:43:48.397644 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:43:48.397651 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:43:48.397659 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:43:48.397667 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:43:48.397675 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:43:48.397682 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:43:48.397690 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:43:48.397697 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:43:48.397704 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:43:48.397712 kernel: ACPI: Interpreter enabled
Feb 13 20:43:48.397719 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:43:48.397726 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:43:48.397735 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:43:48.397742 kernel: printk: bootconsole [pl11] disabled
Feb 13 20:43:48.397750 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 20:43:48.397757 kernel: iommu: Default domain type: Translated
Feb 13 20:43:48.397764 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:43:48.397772 kernel: efivars: Registered efivars operations
Feb 13 20:43:48.397779 kernel: vgaarb: loaded
Feb 13 20:43:48.397786 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:43:48.397794 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:43:48.397803 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:43:48.397810 kernel: pnp: PnP ACPI init
Feb 13 20:43:48.397817 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 20:43:48.397824 kernel: NET: Registered PF_INET protocol family
Feb 13 20:43:48.397832 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:43:48.397839 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:43:48.397847 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:43:48.397854 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:43:48.397862 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:43:48.397870 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:43:48.397878 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:43:48.397885 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:43:48.397893 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:43:48.397900 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:43:48.397907 kernel: kvm [1]: HYP mode not available
Feb 13 20:43:48.397915 kernel: Initialise system trusted keyrings
Feb 13 20:43:48.397922 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:43:48.397929 kernel: Key type asymmetric registered
Feb 13 20:43:48.397938 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:43:48.397945 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:43:48.397953 kernel: io scheduler mq-deadline registered
Feb 13 20:43:48.397960 kernel: io scheduler kyber registered
Feb 13 20:43:48.397968 kernel: io scheduler bfq registered
Feb 13 20:43:48.397975 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:43:48.397982 kernel: thunder_xcv, ver 1.0
Feb 13 20:43:48.397989 kernel: thunder_bgx, ver 1.0
Feb 13 20:43:48.397997 kernel: nicpf, ver 1.0
Feb 13 20:43:48.398004 kernel: nicvf, ver 1.0
Feb 13 20:43:48.398162 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:43:48.398237 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:43:47 UTC (1739479427)
Feb 13 20:43:48.398247 kernel: efifb: probing for efifb
Feb 13 20:43:48.398255 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 20:43:48.398262 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 20:43:48.398269 kernel: efifb: scrolling: redraw
Feb 13 20:43:48.398277 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 20:43:48.398287 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 20:43:48.398294 kernel: fb0: EFI VGA frame buffer device
Feb 13 20:43:48.398302 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 20:43:48.398309 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:43:48.398317 kernel: No ACPI PMU IRQ for CPU0
Feb 13 20:43:48.398324 kernel: No ACPI PMU IRQ for CPU1
Feb 13 20:43:48.398331 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 20:43:48.398338 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:43:48.398346 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:43:48.398355 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:43:48.398375 kernel: Segment Routing with IPv6
Feb 13 20:43:48.398383 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:43:48.398391 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:43:48.398398 kernel: Key type dns_resolver registered
Feb 13 20:43:48.398405 kernel: registered taskstats version 1
Feb 13 20:43:48.398412 kernel: Loading compiled-in X.509 certificates
Feb 13 20:43:48.398420 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:43:48.398427 kernel: Key type .fscrypt registered
Feb 13 20:43:48.398436 kernel: Key type fscrypt-provisioning registered
Feb 13 20:43:48.398443 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:43:48.398451 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:43:48.398458 kernel: ima: No architecture policies found
Feb 13 20:43:48.398465 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:43:48.398473 kernel: clk: Disabling unused clocks
Feb 13 20:43:48.398480 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:43:48.398487 kernel: Run /init as init process
Feb 13 20:43:48.398494 kernel: with arguments:
Feb 13 20:43:48.398503 kernel: /init
Feb 13 20:43:48.398510 kernel: with environment:
Feb 13 20:43:48.398517 kernel: HOME=/
Feb 13 20:43:48.398524 kernel: TERM=linux
Feb 13 20:43:48.398532 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:43:48.398541 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:43:48.398551 systemd[1]: Detected virtualization microsoft.
Feb 13 20:43:48.398558 systemd[1]: Detected architecture arm64.
Feb 13 20:43:48.398568 systemd[1]: Running in initrd.
Feb 13 20:43:48.398575 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:43:48.398583 systemd[1]: Hostname set to .
Feb 13 20:43:48.398591 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:43:48.398599 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:43:48.398607 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:43:48.398615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:43:48.398623 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:43:48.398633 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:43:48.398641 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:43:48.398649 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:43:48.398658 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:43:48.398667 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:43:48.398674 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:43:48.398684 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:43:48.398692 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:43:48.398699 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:43:48.398707 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:43:48.398715 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:43:48.398723 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:43:48.398731 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:43:48.398739 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:43:48.398747 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:43:48.398756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:43:48.398764 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:43:48.398772 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:43:48.398780 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:43:48.398788 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:43:48.398796 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:43:48.398804 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:43:48.398811 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:43:48.398819 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:43:48.398829 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:43:48.398855 systemd-journald[217]: Collecting audit messages is disabled.
Feb 13 20:43:48.398875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:48.398884 systemd-journald[217]: Journal started
Feb 13 20:43:48.398905 systemd-journald[217]: Runtime Journal (/run/log/journal/ee5977e288eb4ea788114cef6a2a52c7) is 8.0M, max 78.5M, 70.5M free.
Feb 13 20:43:48.410873 systemd-modules-load[218]: Inserted module 'overlay'
Feb 13 20:43:48.428145 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:43:48.428647 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:43:48.443112 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:43:48.468451 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:43:48.468516 kernel: Bridge firewalling registered
Feb 13 20:43:48.471213 systemd-modules-load[218]: Inserted module 'br_netfilter'
Feb 13 20:43:48.472736 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:43:48.482180 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:43:48.493503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:48.517805 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:43:48.531560 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:43:48.550279 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:43:48.562593 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:43:48.597547 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:48.618056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:43:48.626624 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:43:48.648555 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:43:48.663563 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:43:48.689053 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:43:48.720286 dracut-cmdline[248]: dracut-dracut-053
Feb 13 20:43:48.734753 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:43:48.720925 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:43:48.729254 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:43:48.814157 systemd-resolved[257]: Positive Trust Anchors:
Feb 13 20:43:48.814171 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:43:48.814203 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:43:48.821628 systemd-resolved[257]: Defaulting to hostname 'linux'.
Feb 13 20:43:48.822622 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:43:48.840041 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:43:48.914377 kernel: SCSI subsystem initialized
Feb 13 20:43:48.923377 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:43:48.934436 kernel: iscsi: registered transport (tcp)
Feb 13 20:43:48.953063 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:43:48.953092 kernel: QLogic iSCSI HBA Driver
Feb 13 20:43:48.987898 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:43:49.005644 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:43:49.035036 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:43:49.035082 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:43:49.035377 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:43:49.092390 kernel: raid6: neonx8 gen() 15795 MB/s
Feb 13 20:43:49.112374 kernel: raid6: neonx4 gen() 15665 MB/s
Feb 13 20:43:49.132372 kernel: raid6: neonx2 gen() 13243 MB/s
Feb 13 20:43:49.153375 kernel: raid6: neonx1 gen() 10469 MB/s
Feb 13 20:43:49.173373 kernel: raid6: int64x8 gen() 6963 MB/s
Feb 13 20:43:49.193372 kernel: raid6: int64x4 gen() 7356 MB/s
Feb 13 20:43:49.214376 kernel: raid6: int64x2 gen() 6121 MB/s
Feb 13 20:43:49.238368 kernel: raid6: int64x1 gen() 5061 MB/s
Feb 13 20:43:49.238379 kernel: raid6: using algorithm neonx8 gen() 15795 MB/s
Feb 13 20:43:49.263681 kernel: raid6: .... xor() 11933 MB/s, rmw enabled
Feb 13 20:43:49.263692 kernel: raid6: using neon recovery algorithm
Feb 13 20:43:49.277018 kernel: xor: measuring software checksum speed
Feb 13 20:43:49.277033 kernel: 8regs : 19797 MB/sec
Feb 13 20:43:49.286880 kernel: 32regs : 18620 MB/sec
Feb 13 20:43:49.286892 kernel: arm64_neon : 27052 MB/sec
Feb 13 20:43:49.291809 kernel: xor: using function: arm64_neon (27052 MB/sec)
Feb 13 20:43:49.342377 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:43:49.353006 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:43:49.372548 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:43:49.397437 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Feb 13 20:43:49.403115 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:43:49.423683 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:43:49.442788 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Feb 13 20:43:49.476436 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:43:49.494668 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:43:49.533300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:43:49.563638 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:43:49.592872 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:43:49.609184 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:43:49.623210 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:43:49.652735 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:43:49.674406 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:43:49.683694 kernel: hv_vmbus: Vmbus version:5.3
Feb 13 20:43:49.695187 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:43:49.725086 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:43:49.814359 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 20:43:49.814447 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 20:43:49.814458 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 20:43:49.814467 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 20:43:49.814494 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 20:43:49.814506 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Feb 13 20:43:49.814516 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Feb 13 20:43:49.814526 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 20:43:49.814676 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 20:43:49.725265 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:49.750127 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:43:49.839986 kernel: PTP clock support registered
Feb 13 20:43:49.840011 kernel: scsi host1: storvsc_host_t
Feb 13 20:43:49.787018 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:43:49.908532 kernel: scsi host0: storvsc_host_t
Feb 13 20:43:49.908726 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 13 20:43:49.908826 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 13 20:43:49.908916 kernel: hv_netvsc 000d3ac2-b678-000d-3ac2-b678000d3ac2 eth0: VF slot 1 added
Feb 13 20:43:49.909007 kernel: hv_vmbus: registering driver hv_pci
Feb 13 20:43:49.909017 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 20:43:49.787248 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:49.952069 kernel: hv_pci 94cb39e2-6125-4b17-90e2-0c63fd4590cc: PCI VMBus probing: Using version 0x10004
Feb 13 20:43:50.164484 kernel: hv_vmbus: registering driver hv_utils
Feb 13 20:43:50.164518 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 20:43:50.164660 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 20:43:50.164671 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 20:43:50.164681 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 20:43:50.164693 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 20:43:50.164811 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 20:43:50.164823 kernel: hv_pci 94cb39e2-6125-4b17-90e2-0c63fd4590cc: PCI host bridge to bus 6125:00
Feb 13 20:43:50.164915 kernel: pci_bus 6125:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 13 20:43:50.165038 kernel: pci_bus 6125:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 20:43:50.165125 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 20:43:50.165217 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 20:43:50.165297 kernel: pci 6125:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 13 20:43:50.165396 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 20:43:50.165476 kernel: pci 6125:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 20:43:50.165563 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 20:43:50.165644 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 20:43:50.165725 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:50.165735 kernel: pci 6125:00:02.0: enabling Extended Tags
Feb 13 20:43:50.165814 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 20:43:50.165897 kernel: pci 6125:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6125:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 13 20:43:50.165978 kernel: pci_bus 6125:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 20:43:50.166078 kernel: pci 6125:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 20:43:49.806733 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:49.836481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:49.875562 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:43:49.875672 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:49.915648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:50.014987 systemd-resolved[257]: Clock change detected. Flushing caches.
Feb 13 20:43:50.036358 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:50.087809 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:43:50.165181 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:50.259645 kernel: mlx5_core 6125:00:02.0: enabling device (0000 -> 0002)
Feb 13 20:43:50.553408 kernel: mlx5_core 6125:00:02.0: firmware version: 16.30.1284
Feb 13 20:43:50.553557 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (483)
Feb 13 20:43:50.553569 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (497)
Feb 13 20:43:50.553579 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:50.553589 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:50.553598 kernel: hv_netvsc 000d3ac2-b678-000d-3ac2-b678000d3ac2 eth0: VF registering: eth1
Feb 13 20:43:50.553701 kernel: mlx5_core 6125:00:02.0 eth1: joined to eth0
Feb 13 20:43:50.553792 kernel: mlx5_core 6125:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Feb 13 20:43:50.301609 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 20:43:50.345976 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 20:43:50.579684 kernel: mlx5_core 6125:00:02.0 enP24869s1: renamed from eth1
Feb 13 20:43:50.373290 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 20:43:50.398502 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 20:43:50.406504 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 20:43:50.419178 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:43:51.460085 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:51.460268 disk-uuid[599]: The operation has completed successfully.
Feb 13 20:43:51.522088 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:43:51.524040 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:43:51.556175 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:43:51.573480 sh[689]: Success
Feb 13 20:43:51.595071 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:43:51.658795 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:43:51.684176 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:43:51.690586 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:43:51.727794 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:43:51.727827 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:43:51.727838 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:43:51.741258 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:43:51.745706 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:43:51.805564 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:43:51.811765 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:43:51.835272 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:43:51.843790 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:43:51.889550 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:51.889617 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:43:51.896856 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:43:51.907062 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:43:51.923175 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:43:51.929397 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:51.936396 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:43:51.947372 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:43:51.970233 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:43:51.984222 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:43:52.015548 systemd-networkd[873]: lo: Link UP
Feb 13 20:43:52.015561 systemd-networkd[873]: lo: Gained carrier
Feb 13 20:43:52.017668 systemd-networkd[873]: Enumeration completed
Feb 13 20:43:52.017865 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:43:52.022145 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:43:52.022149 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:43:52.031808 systemd[1]: Reached target network.target - Network.
Feb 13 20:43:52.123036 kernel: mlx5_core 6125:00:02.0 enP24869s1: Link up
Feb 13 20:43:52.164107 kernel: hv_netvsc 000d3ac2-b678-000d-3ac2-b678000d3ac2 eth0: Data path switched to VF: enP24869s1
Feb 13 20:43:52.164453 systemd-networkd[873]: enP24869s1: Link UP
Feb 13 20:43:52.164683 systemd-networkd[873]: eth0: Link UP
Feb 13 20:43:52.165125 systemd-networkd[873]: eth0: Gained carrier
Feb 13 20:43:52.165135 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:43:52.173645 systemd-networkd[873]: enP24869s1: Gained carrier
Feb 13 20:43:52.201094 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 20:43:52.214143 ignition[872]: Ignition 2.19.0
Feb 13 20:43:52.214154 ignition[872]: Stage: fetch-offline
Feb 13 20:43:52.219288 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:43:52.214189 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:52.214197 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:52.214295 ignition[872]: parsed url from cmdline: ""
Feb 13 20:43:52.214298 ignition[872]: no config URL provided
Feb 13 20:43:52.214302 ignition[872]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:43:52.249331 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:43:52.214308 ignition[872]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:43:52.214313 ignition[872]: failed to fetch config: resource requires networking
Feb 13 20:43:52.215255 ignition[872]: Ignition finished successfully
Feb 13 20:43:52.280028 ignition[884]: Ignition 2.19.0
Feb 13 20:43:52.280036 ignition[884]: Stage: fetch
Feb 13 20:43:52.280311 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:52.280324 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:52.280441 ignition[884]: parsed url from cmdline: ""
Feb 13 20:43:52.280445 ignition[884]: no config URL provided
Feb 13 20:43:52.280493 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:43:52.280501 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:43:52.280526 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 20:43:52.377804 ignition[884]: GET result: OK
Feb 13 20:43:52.377863 ignition[884]: config has been read from IMDS userdata
Feb 13 20:43:52.377907 ignition[884]: parsing config with SHA512: a8d97d1afcdcdcedf28100ebba2e210ae2b1d154637fc3633176fa26cdd42367c78b0150d6d817f2f9b6ae42f13d89aabb15a2b45f1195e5a6bafbd4914c588f
Feb 13 20:43:52.382080 unknown[884]: fetched base config from "system"
Feb 13 20:43:52.382492 ignition[884]: fetch: fetch complete
Feb 13 20:43:52.382089 unknown[884]: fetched base config from "system"
Feb 13 20:43:52.382496 ignition[884]: fetch: fetch passed
Feb 13 20:43:52.382095 unknown[884]: fetched user config from "azure"
Feb 13 20:43:52.382544 ignition[884]: Ignition finished successfully
Feb 13 20:43:52.388932 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:43:52.428464 ignition[890]: Ignition 2.19.0
Feb 13 20:43:52.408394 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:43:52.428471 ignition[890]: Stage: kargs
Feb 13 20:43:52.439497 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:43:52.428665 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:52.456379 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:43:52.428674 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:52.483710 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:43:52.429727 ignition[890]: kargs: kargs passed
Feb 13 20:43:52.490531 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:43:52.429797 ignition[890]: Ignition finished successfully
Feb 13 20:43:52.504830 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:43:52.475235 ignition[897]: Ignition 2.19.0
Feb 13 20:43:52.518099 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:43:52.475242 ignition[897]: Stage: disks
Feb 13 20:43:52.527937 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:43:52.475445 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:52.541109 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:43:52.475455 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:52.563283 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:43:52.476469 ignition[897]: disks: disks passed
Feb 13 20:43:52.476521 ignition[897]: Ignition finished successfully
Feb 13 20:43:52.634608 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 20:43:52.646736 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:43:52.667259 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:43:52.729039 kernel: EXT4-fs (sda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:43:52.730470 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:43:52.739902 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:43:52.762095 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:43:52.770187 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:43:52.802270 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (917)
Feb 13 20:43:52.802297 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:52.791245 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 20:43:52.836292 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:43:52.836319 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:43:52.821973 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:43:52.865887 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:43:52.822092 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:43:52.868797 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:43:52.882911 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:43:52.903314 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:43:53.019747 coreos-metadata[919]: Feb 13 20:43:53.019 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 20:43:53.031261 coreos-metadata[919]: Feb 13 20:43:53.031 INFO Fetch successful
Feb 13 20:43:53.037284 coreos-metadata[919]: Feb 13 20:43:53.036 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 20:43:53.050327 coreos-metadata[919]: Feb 13 20:43:53.049 INFO Fetch successful
Feb 13 20:43:53.057400 coreos-metadata[919]: Feb 13 20:43:53.055 INFO wrote hostname ci-4081.3.1-a-1c3e1e2868 to /sysroot/etc/hostname
Feb 13 20:43:53.057769 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:43:53.100627 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:43:53.118026 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:43:53.125701 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:43:53.135087 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:43:53.209198 systemd-networkd[873]: eth0: Gained IPv6LL
Feb 13 20:43:53.395780 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:43:53.411217 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:43:53.422233 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:43:53.447085 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:53.446371 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:43:53.473643 ignition[1034]: INFO : Ignition 2.19.0
Feb 13 20:43:53.473643 ignition[1034]: INFO : Stage: mount
Feb 13 20:43:53.496649 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:53.496649 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:53.496649 ignition[1034]: INFO : mount: mount passed
Feb 13 20:43:53.496649 ignition[1034]: INFO : Ignition finished successfully
Feb 13 20:43:53.479685 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:43:53.486559 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:43:53.514153 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:43:53.533239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:43:53.565208 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1047)
Feb 13 20:43:53.565249 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:53.579226 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:43:53.584077 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:43:53.591030 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:43:53.593297 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:43:53.623039 ignition[1065]: INFO : Ignition 2.19.0
Feb 13 20:43:53.623039 ignition[1065]: INFO : Stage: files
Feb 13 20:43:53.623039 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:53.623039 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:53.645965 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:43:53.645965 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:43:53.645965 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:43:53.671956 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:43:53.671956 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:43:53.671956 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:43:53.671956 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 20:43:53.671956 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 20:43:53.654198 unknown[1065]: wrote ssh authorized keys file for user: core
Feb 13 20:43:53.726357 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:43:53.726357 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 20:43:53.725732 systemd-networkd[873]: enP24869s1: Gained IPv6LL
Feb 13 20:43:53.970375 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 20:43:54.178200 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 20:43:54.653633 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 20:43:54.840860 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:43:54.840860 ignition[1065]: INFO : files: op(c): [started] processing unit "containerd.service"
Feb 13 20:43:54.861744 ignition[1065]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(c): [finished] processing unit "containerd.service"
Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:43:54.874999 ignition[1065]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:43:54.874999 ignition[1065]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:43:54.874999 ignition[1065]: INFO : files: files passed
Feb 13 20:43:54.874999 ignition[1065]: INFO : Ignition finished successfully
Feb 13 20:43:54.888629 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:43:54.936275 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:43:54.946231 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:43:54.976296 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:43:55.070068 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:43:55.070068 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:43:54.976393 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:43:55.107514 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:43:54.985981 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:43:55.003149 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:43:55.032273 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:43:55.071919 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:43:55.073045 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:43:55.087000 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:43:55.101559 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:43:55.113933 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:43:55.117256 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:43:55.192664 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:43:55.219276 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:43:55.241657 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:43:55.241792 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:43:55.256076 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:43:55.269019 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:43:55.283486 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:43:55.295903 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:43:55.295981 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:43:55.314654 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:43:55.327800 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:43:55.339747 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:43:55.354119 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:43:55.366945 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:43:55.379619 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:43:55.391625 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:43:55.404612 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:43:55.418690 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:43:55.430387 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:43:55.440337 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:43:55.440425 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:43:55.455936 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:43:55.462972 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:43:55.476891 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:43:55.480040 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:43:55.489500 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:43:55.489575 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:43:55.507946 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:43:55.508027 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:43:55.524229 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:43:55.524292 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:43:55.536747 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 20:43:55.536797 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:43:55.571247 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:43:55.603821 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:43:55.621393 ignition[1117]: INFO : Ignition 2.19.0
Feb 13 20:43:55.621393 ignition[1117]: INFO : Stage: umount
Feb 13 20:43:55.621393 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:55.621393 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:55.621393 ignition[1117]: INFO : umount: umount passed
Feb 13 20:43:55.621393 ignition[1117]: INFO : Ignition finished successfully
Feb 13 20:43:55.614092 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:43:55.614172 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:43:55.630521 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:43:55.630587 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:43:55.644490 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:43:55.645000 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:43:55.647050 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:43:55.660866 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:43:55.660971 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:43:55.672904 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:43:55.672966 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:43:55.681208 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 20:43:55.681281 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 20:43:55.690262 systemd[1]: Stopped target network.target - Network.
Feb 13 20:43:55.701397 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:43:55.701465 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:43:55.716693 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:43:55.728782 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:43:55.736060 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:43:55.749929 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:43:55.762218 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:43:55.773446 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:43:55.773504 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:43:55.784804 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:43:55.784862 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:43:55.796567 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:43:55.796634 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:43:55.808993 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:43:55.809054 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:43:55.821039 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:43:55.833367 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:43:55.846078 systemd-networkd[873]: eth0: DHCPv6 lease lost
Feb 13 20:43:55.850955 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:43:55.851154 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:43:55.865460 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:43:56.102199 kernel: hv_netvsc 000d3ac2-b678-000d-3ac2-b678000d3ac2 eth0: Data path switched from VF: enP24869s1
Feb 13 20:43:55.865607 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:43:55.880337 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:43:55.880396 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:43:55.911238 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:43:55.923598 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:43:55.923682 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:43:55.938897 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:43:55.938956 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:43:55.951408 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:43:55.951461 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:43:55.969195 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:43:55.969263 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:43:55.981961 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:43:55.998072 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:43:55.998564 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:43:56.018444 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:43:56.018584 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:43:56.033070 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:43:56.033154 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:43:56.047535 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:43:56.047581 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:43:56.058950 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:43:56.059125 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:43:56.080651 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:43:56.080725 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:43:56.102003 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:43:56.102101 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:56.117125 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:43:56.117198 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:43:56.140295 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:43:56.405686 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:43:56.160791 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:43:56.160873 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:43:56.179250 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:43:56.179307 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:43:56.200634 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:43:56.200697 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:43:56.215849 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:43:56.215910 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:56.234666 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:43:56.234773 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:43:56.245687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:43:56.245774 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:43:56.262578 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:43:56.294283 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:43:56.342815 systemd[1]: Switching root.
Feb 13 20:43:56.512087 systemd-journald[217]: Journal stopped
Feb 13 20:43:48.396998 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 13 20:43:48.397005 kernel: Policy zone: Normal
Feb 13 20:43:48.397011 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:43:48.397018 kernel: software IO TLB: area num 2.
Feb 13 20:43:48.397026 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Feb 13 20:43:48.397033 kernel: Memory: 3982752K/4194160K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 211408K reserved, 0K cma-reserved)
Feb 13 20:43:48.397040 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:43:48.397047 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:43:48.397055 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:43:48.397062 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:43:48.397068 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:43:48.397075 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:43:48.397082 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:43:48.397089 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:43:48.397096 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:43:48.397104 kernel: GICv3: 960 SPIs implemented
Feb 13 20:43:48.397111 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:43:48.397117 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:43:48.397124 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:43:48.397131 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 20:43:48.397138 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 20:43:48.397145 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:43:48.397152 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:43:48.397158 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:43:48.397165 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:43:48.397172 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:43:48.397181 kernel: Console: colour dummy device 80x25
Feb 13 20:43:48.397188 kernel: printk: console [tty1] enabled
Feb 13 20:43:48.397195 kernel: ACPI: Core revision 20230628
Feb 13 20:43:48.397202 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:43:48.397209 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:43:48.397216 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:43:48.397223 kernel: landlock: Up and running.
Feb 13 20:43:48.397230 kernel: SELinux: Initializing.
Feb 13 20:43:48.397237 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:43:48.397244 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:43:48.397252 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:43:48.397259 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:43:48.397267 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 20:43:48.397274 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 20:43:48.397281 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 20:43:48.397287 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:43:48.397295 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:43:48.397308 kernel: Remapping and enabling EFI services.
Feb 13 20:43:48.397316 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:43:48.397323 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:43:48.397330 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 20:43:48.397339 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:43:48.397347 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:43:48.397354 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:43:48.397376 kernel: SMP: Total of 2 processors activated.
Feb 13 20:43:48.397386 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:43:48.397396 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 20:43:48.397403 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:43:48.397411 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:43:48.397418 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:43:48.397426 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:43:48.397433 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:43:48.397440 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:43:48.397448 kernel: alternatives: applying system-wide alternatives
Feb 13 20:43:48.397455 kernel: devtmpfs: initialized
Feb 13 20:43:48.397464 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:43:48.397471 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:43:48.397479 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:43:48.397486 kernel: SMBIOS 3.1.0 present.
Feb 13 20:43:48.397494 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 20:43:48.397501 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:43:48.397509 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:43:48.397516 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:43:48.397524 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:43:48.397532 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:43:48.397540 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 20:43:48.397547 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:43:48.397554 kernel: cpuidle: using governor menu
Feb 13 20:43:48.397562 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:43:48.397569 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:43:48.397576 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:43:48.397584 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:43:48.397591 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:43:48.397600 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:43:48.397607 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:43:48.397615 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:43:48.397622 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:43:48.397629 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:43:48.397637 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:43:48.397644 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:43:48.397651 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:43:48.397659 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:43:48.397667 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:43:48.397675 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:43:48.397682 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:43:48.397690 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:43:48.397697 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:43:48.397704 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:43:48.397712 kernel: ACPI: Interpreter enabled
Feb 13 20:43:48.397719 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:43:48.397726 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:43:48.397735 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:43:48.397742 kernel: printk: bootconsole [pl11] disabled
Feb 13 20:43:48.397750 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 20:43:48.397757 kernel: iommu: Default domain type: Translated
Feb 13 20:43:48.397764 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:43:48.397772 kernel: efivars: Registered efivars operations
Feb 13 20:43:48.397779 kernel: vgaarb: loaded
Feb 13 20:43:48.397786 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:43:48.397794 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:43:48.397803 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:43:48.397810 kernel: pnp: PnP ACPI init
Feb 13 20:43:48.397817 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 20:43:48.397824 kernel: NET: Registered PF_INET protocol family
Feb 13 20:43:48.397832 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:43:48.397839 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:43:48.397847 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:43:48.397854 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:43:48.397862 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:43:48.397870 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:43:48.397878 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:43:48.397885 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:43:48.397893 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:43:48.397900 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:43:48.397907 kernel: kvm [1]: HYP mode not available
Feb 13 20:43:48.397915 kernel: Initialise system trusted keyrings
Feb 13 20:43:48.397922 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:43:48.397929 kernel: Key type asymmetric registered
Feb 13 20:43:48.397938 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:43:48.397945 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:43:48.397953 kernel: io scheduler mq-deadline registered
Feb 13 20:43:48.397960 kernel: io scheduler kyber registered
Feb 13 20:43:48.397968 kernel: io scheduler bfq registered
Feb 13 20:43:48.397975 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:43:48.397982 kernel: thunder_xcv, ver 1.0
Feb 13 20:43:48.397989 kernel: thunder_bgx, ver 1.0
Feb 13 20:43:48.397997 kernel: nicpf, ver 1.0
Feb 13 20:43:48.398004 kernel: nicvf, ver 1.0
Feb 13 20:43:48.398162 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:43:48.398237 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:43:47 UTC (1739479427)
Feb 13 20:43:48.398247 kernel: efifb: probing for efifb
Feb 13 20:43:48.398255 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 20:43:48.398262 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 20:43:48.398269 kernel: efifb: scrolling: redraw
Feb 13 20:43:48.398277 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 20:43:48.398287 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 20:43:48.398294 kernel: fb0: EFI VGA frame buffer device
Feb 13 20:43:48.398302 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 20:43:48.398309 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:43:48.398317 kernel: No ACPI PMU IRQ for CPU0
Feb 13 20:43:48.398324 kernel: No ACPI PMU IRQ for CPU1
Feb 13 20:43:48.398331 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 20:43:48.398338 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:43:48.398346 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:43:48.398355 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:43:48.398375 kernel: Segment Routing with IPv6
Feb 13 20:43:48.398383 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:43:48.398391 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:43:48.398398 kernel: Key type dns_resolver registered
Feb 13 20:43:48.398405 kernel: registered taskstats version 1
Feb 13 20:43:48.398412 kernel: Loading compiled-in X.509 certificates
Feb 13 20:43:48.398420 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:43:48.398427 kernel: Key type .fscrypt registered
Feb 13 20:43:48.398436 kernel: Key type fscrypt-provisioning registered
Feb 13 20:43:48.398443 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:43:48.398451 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:43:48.398458 kernel: ima: No architecture policies found
Feb 13 20:43:48.398465 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:43:48.398473 kernel: clk: Disabling unused clocks
Feb 13 20:43:48.398480 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:43:48.398487 kernel: Run /init as init process
Feb 13 20:43:48.398494 kernel: with arguments:
Feb 13 20:43:48.398503 kernel: /init
Feb 13 20:43:48.398510 kernel: with environment:
Feb 13 20:43:48.398517 kernel: HOME=/
Feb 13 20:43:48.398524 kernel: TERM=linux
Feb 13 20:43:48.398532 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:43:48.398541 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:43:48.398551 systemd[1]: Detected virtualization microsoft.
Feb 13 20:43:48.398558 systemd[1]: Detected architecture arm64.
Feb 13 20:43:48.398568 systemd[1]: Running in initrd.
Feb 13 20:43:48.398575 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:43:48.398583 systemd[1]: Hostname set to .
Feb 13 20:43:48.398591 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:43:48.398599 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:43:48.398607 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:43:48.398615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:43:48.398623 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:43:48.398633 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:43:48.398641 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:43:48.398649 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:43:48.398658 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:43:48.398667 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:43:48.398674 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:43:48.398684 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:43:48.398692 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:43:48.398699 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:43:48.398707 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:43:48.398715 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:43:48.398723 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:43:48.398731 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:43:48.398739 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:43:48.398747 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:43:48.398756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:43:48.398764 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:43:48.398772 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:43:48.398780 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:43:48.398788 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:43:48.398796 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:43:48.398804 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:43:48.398811 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:43:48.398819 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:43:48.398829 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:43:48.398855 systemd-journald[217]: Collecting audit messages is disabled.
Feb 13 20:43:48.398875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:48.398884 systemd-journald[217]: Journal started
Feb 13 20:43:48.398905 systemd-journald[217]: Runtime Journal (/run/log/journal/ee5977e288eb4ea788114cef6a2a52c7) is 8.0M, max 78.5M, 70.5M free.
Feb 13 20:43:48.410873 systemd-modules-load[218]: Inserted module 'overlay'
Feb 13 20:43:48.428145 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:43:48.428647 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:43:48.443112 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:43:48.468451 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:43:48.468516 kernel: Bridge firewalling registered
Feb 13 20:43:48.471213 systemd-modules-load[218]: Inserted module 'br_netfilter'
Feb 13 20:43:48.472736 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:43:48.482180 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:43:48.493503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:48.517805 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:43:48.531560 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:43:48.550279 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:43:48.562593 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:43:48.597547 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:48.618056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:43:48.626624 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:43:48.648555 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:43:48.663563 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:43:48.689053 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:43:48.720286 dracut-cmdline[248]: dracut-dracut-053
Feb 13 20:43:48.734753 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:43:48.720925 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:43:48.729254 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:43:48.814157 systemd-resolved[257]: Positive Trust Anchors:
Feb 13 20:43:48.814171 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:43:48.814203 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:43:48.821628 systemd-resolved[257]: Defaulting to hostname 'linux'.
Feb 13 20:43:48.822622 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:43:48.840041 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:43:48.914377 kernel: SCSI subsystem initialized
Feb 13 20:43:48.923377 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:43:48.934436 kernel: iscsi: registered transport (tcp)
Feb 13 20:43:48.953063 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:43:48.953092 kernel: QLogic iSCSI HBA Driver
Feb 13 20:43:48.987898 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:43:49.005644 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:43:49.035036 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:43:49.035082 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:43:49.035377 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:43:49.092390 kernel: raid6: neonx8 gen() 15795 MB/s
Feb 13 20:43:49.112374 kernel: raid6: neonx4 gen() 15665 MB/s
Feb 13 20:43:49.132372 kernel: raid6: neonx2 gen() 13243 MB/s
Feb 13 20:43:49.153375 kernel: raid6: neonx1 gen() 10469 MB/s
Feb 13 20:43:49.173373 kernel: raid6: int64x8 gen() 6963 MB/s
Feb 13 20:43:49.193372 kernel: raid6: int64x4 gen() 7356 MB/s
Feb 13 20:43:49.214376 kernel: raid6: int64x2 gen() 6121 MB/s
Feb 13 20:43:49.238368 kernel: raid6: int64x1 gen() 5061 MB/s
Feb 13 20:43:49.238379 kernel: raid6: using algorithm neonx8 gen() 15795 MB/s
Feb 13 20:43:49.263681 kernel: raid6: .... xor() 11933 MB/s, rmw enabled
Feb 13 20:43:49.263692 kernel: raid6: using neon recovery algorithm
Feb 13 20:43:49.277018 kernel: xor: measuring software checksum speed
Feb 13 20:43:49.277033 kernel: 8regs : 19797 MB/sec
Feb 13 20:43:49.286880 kernel: 32regs : 18620 MB/sec
Feb 13 20:43:49.286892 kernel: arm64_neon : 27052 MB/sec
Feb 13 20:43:49.291809 kernel: xor: using function: arm64_neon (27052 MB/sec)
Feb 13 20:43:49.342377 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:43:49.353006 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:43:49.372548 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:43:49.397437 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Feb 13 20:43:49.403115 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:43:49.423683 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:43:49.442788 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Feb 13 20:43:49.476436 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:43:49.494668 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:43:49.533300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:43:49.563638 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:43:49.592872 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:43:49.609184 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:43:49.623210 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:43:49.652735 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:43:49.674406 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:43:49.683694 kernel: hv_vmbus: Vmbus version:5.3
Feb 13 20:43:49.695187 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:43:49.725086 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:43:49.814359 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 20:43:49.814447 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 20:43:49.814458 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 20:43:49.814467 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 20:43:49.814494 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 20:43:49.814506 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Feb 13 20:43:49.814516 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Feb 13 20:43:49.814526 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 20:43:49.814676 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 20:43:49.725265 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:49.750127 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:43:49.839986 kernel: PTP clock support registered
Feb 13 20:43:49.840011 kernel: scsi host1: storvsc_host_t
Feb 13 20:43:49.787018 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:43:49.908532 kernel: scsi host0: storvsc_host_t
Feb 13 20:43:49.908726 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 13 20:43:49.908826 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 13 20:43:49.908916 kernel: hv_netvsc 000d3ac2-b678-000d-3ac2-b678000d3ac2 eth0: VF slot 1 added
Feb 13 20:43:49.909007 kernel: hv_vmbus: registering driver hv_pci
Feb 13 20:43:49.909017 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 20:43:49.787248 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:49.952069 kernel: hv_pci 94cb39e2-6125-4b17-90e2-0c63fd4590cc: PCI VMBus probing: Using version 0x10004
Feb 13 20:43:50.164484 kernel: hv_vmbus: registering driver hv_utils
Feb 13 20:43:50.164518 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 20:43:50.164660 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 20:43:50.164671 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 20:43:50.164681 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 20:43:50.164693 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 20:43:50.164811 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 20:43:50.164823 kernel: hv_pci 94cb39e2-6125-4b17-90e2-0c63fd4590cc: PCI host bridge to bus 6125:00
Feb 13 20:43:50.164915 kernel: pci_bus 6125:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 13 20:43:50.165038 kernel: pci_bus 6125:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 20:43:50.165125 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 20:43:50.165217 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 20:43:50.165297 kernel: pci 6125:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 13 20:43:50.165396 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 20:43:50.165476 kernel: pci 6125:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 20:43:50.165563 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 20:43:50.165644 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 20:43:50.165725 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:50.165735 kernel: pci 6125:00:02.0: enabling Extended Tags
Feb 13 20:43:50.165814 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 20:43:50.165897 kernel: pci 6125:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6125:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 13 20:43:50.165978 kernel: pci_bus 6125:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 20:43:50.166078 kernel: pci 6125:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 20:43:49.806733 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:49.836481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:49.875562 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:43:49.875672 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:49.915648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:50.014987 systemd-resolved[257]: Clock change detected. Flushing caches.
Feb 13 20:43:50.036358 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:50.087809 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:43:50.165181 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:50.259645 kernel: mlx5_core 6125:00:02.0: enabling device (0000 -> 0002)
Feb 13 20:43:50.553408 kernel: mlx5_core 6125:00:02.0: firmware version: 16.30.1284
Feb 13 20:43:50.553557 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (483)
Feb 13 20:43:50.553569 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (497)
Feb 13 20:43:50.553579 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:50.553589 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:50.553598 kernel: hv_netvsc 000d3ac2-b678-000d-3ac2-b678000d3ac2 eth0: VF registering: eth1
Feb 13 20:43:50.553701 kernel: mlx5_core 6125:00:02.0 eth1: joined to eth0
Feb 13 20:43:50.553792 kernel: mlx5_core 6125:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Feb 13 20:43:50.301609 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 20:43:50.345976 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 20:43:50.579684 kernel: mlx5_core 6125:00:02.0 enP24869s1: renamed from eth1
Feb 13 20:43:50.373290 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 20:43:50.398502 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 20:43:50.406504 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 20:43:50.419178 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:43:51.460085 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:51.460268 disk-uuid[599]: The operation has completed successfully.
Feb 13 20:43:51.522088 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:43:51.524040 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:43:51.556175 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:43:51.573480 sh[689]: Success
Feb 13 20:43:51.595071 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:43:51.658795 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:43:51.684176 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:43:51.690586 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:43:51.727794 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:43:51.727827 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:43:51.727838 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:43:51.741258 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:43:51.745706 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:43:51.805564 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:43:51.811765 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:43:51.835272 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:43:51.843790 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:43:51.889550 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:51.889617 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:43:51.896856 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:43:51.907062 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:43:51.923175 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:43:51.929397 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:51.936396 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:43:51.947372 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:43:51.970233 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:43:51.984222 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:43:52.015548 systemd-networkd[873]: lo: Link UP
Feb 13 20:43:52.015561 systemd-networkd[873]: lo: Gained carrier
Feb 13 20:43:52.017668 systemd-networkd[873]: Enumeration completed
Feb 13 20:43:52.017865 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:43:52.022145 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:43:52.022149 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:43:52.031808 systemd[1]: Reached target network.target - Network.
Feb 13 20:43:52.123036 kernel: mlx5_core 6125:00:02.0 enP24869s1: Link up
Feb 13 20:43:52.164107 kernel: hv_netvsc 000d3ac2-b678-000d-3ac2-b678000d3ac2 eth0: Data path switched to VF: enP24869s1
Feb 13 20:43:52.164453 systemd-networkd[873]: enP24869s1: Link UP
Feb 13 20:43:52.164683 systemd-networkd[873]: eth0: Link UP
Feb 13 20:43:52.165125 systemd-networkd[873]: eth0: Gained carrier
Feb 13 20:43:52.165135 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:43:52.173645 systemd-networkd[873]: enP24869s1: Gained carrier
Feb 13 20:43:52.201094 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 20:43:52.214143 ignition[872]: Ignition 2.19.0
Feb 13 20:43:52.214154 ignition[872]: Stage: fetch-offline
Feb 13 20:43:52.219288 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:43:52.214189 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:52.214197 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:52.214295 ignition[872]: parsed url from cmdline: ""
Feb 13 20:43:52.214298 ignition[872]: no config URL provided
Feb 13 20:43:52.214302 ignition[872]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:43:52.249331 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:43:52.214308 ignition[872]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:43:52.214313 ignition[872]: failed to fetch config: resource requires networking Feb 13 20:43:52.215255 ignition[872]: Ignition finished successfully Feb 13 20:43:52.280028 ignition[884]: Ignition 2.19.0 Feb 13 20:43:52.280036 ignition[884]: Stage: fetch Feb 13 20:43:52.280311 ignition[884]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:52.280324 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:52.280441 ignition[884]: parsed url from cmdline: "" Feb 13 20:43:52.280445 ignition[884]: no config URL provided Feb 13 20:43:52.280493 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:43:52.280501 ignition[884]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:43:52.280526 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 20:43:52.377804 ignition[884]: GET result: OK Feb 13 20:43:52.377863 ignition[884]: config has been read from IMDS userdata Feb 13 20:43:52.377907 ignition[884]: parsing config with SHA512: a8d97d1afcdcdcedf28100ebba2e210ae2b1d154637fc3633176fa26cdd42367c78b0150d6d817f2f9b6ae42f13d89aabb15a2b45f1195e5a6bafbd4914c588f Feb 13 20:43:52.382080 unknown[884]: fetched base config from "system" Feb 13 20:43:52.382492 ignition[884]: fetch: fetch complete Feb 13 20:43:52.382089 unknown[884]: fetched base config from "system" Feb 13 20:43:52.382496 ignition[884]: fetch: fetch passed Feb 13 20:43:52.382095 unknown[884]: fetched user config from "azure" Feb 13 20:43:52.382544 ignition[884]: Ignition finished successfully Feb 13 20:43:52.388932 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:43:52.428464 ignition[890]: Ignition 2.19.0 Feb 13 20:43:52.408394 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:43:52.428471 ignition[890]: Stage: kargs Feb 13 20:43:52.439497 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:43:52.428665 ignition[890]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:52.456379 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:43:52.428674 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:52.483710 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:43:52.429727 ignition[890]: kargs: kargs passed Feb 13 20:43:52.490531 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:43:52.429797 ignition[890]: Ignition finished successfully Feb 13 20:43:52.504830 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:43:52.475235 ignition[897]: Ignition 2.19.0 Feb 13 20:43:52.518099 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:43:52.475242 ignition[897]: Stage: disks Feb 13 20:43:52.527937 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:43:52.475445 ignition[897]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:52.541109 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:43:52.475455 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:52.563283 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Feb 13 20:43:52.476469 ignition[897]: disks: disks passed Feb 13 20:43:52.476521 ignition[897]: Ignition finished successfully Feb 13 20:43:52.634608 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 20:43:52.646736 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:43:52.667259 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:43:52.729039 kernel: EXT4-fs (sda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 20:43:52.730470 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:43:52.739902 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:43:52.762095 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:43:52.770187 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:43:52.802270 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (917) Feb 13 20:43:52.802297 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:43:52.791245 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:43:52.836292 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:43:52.836319 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:43:52.821973 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:43:52.865887 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:43:52.822092 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:43:52.868797 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:43:52.882911 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:43:52.903314 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:43:53.019747 coreos-metadata[919]: Feb 13 20:43:53.019 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 20:43:53.031261 coreos-metadata[919]: Feb 13 20:43:53.031 INFO Fetch successful Feb 13 20:43:53.037284 coreos-metadata[919]: Feb 13 20:43:53.036 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 20:43:53.050327 coreos-metadata[919]: Feb 13 20:43:53.049 INFO Fetch successful Feb 13 20:43:53.057400 coreos-metadata[919]: Feb 13 20:43:53.055 INFO wrote hostname ci-4081.3.1-a-1c3e1e2868 to /sysroot/etc/hostname Feb 13 20:43:53.057769 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:43:53.100627 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:43:53.118026 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:43:53.125701 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:43:53.135087 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:43:53.209198 systemd-networkd[873]: eth0: Gained IPv6LL Feb 13 20:43:53.395780 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:43:53.411217 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:43:53.422233 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Feb 13 20:43:53.447085 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:43:53.446371 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:43:53.473643 ignition[1034]: INFO : Ignition 2.19.0 Feb 13 20:43:53.473643 ignition[1034]: INFO : Stage: mount Feb 13 20:43:53.496649 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:53.496649 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:53.496649 ignition[1034]: INFO : mount: mount passed Feb 13 20:43:53.496649 ignition[1034]: INFO : Ignition finished successfully Feb 13 20:43:53.479685 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:43:53.486559 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:43:53.514153 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:43:53.533239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:43:53.565208 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1047) Feb 13 20:43:53.565249 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:43:53.579226 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:43:53.584077 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:43:53.591030 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:43:53.593297 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:43:53.623039 ignition[1065]: INFO : Ignition 2.19.0 Feb 13 20:43:53.623039 ignition[1065]: INFO : Stage: files Feb 13 20:43:53.623039 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:53.623039 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:53.645965 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:43:53.645965 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:43:53.645965 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:43:53.671956 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:43:53.671956 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:43:53.671956 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:43:53.671956 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:43:53.671956 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:43:53.654198 unknown[1065]: wrote ssh authorized keys file for user: core Feb 13 20:43:53.726357 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 20:43:53.726357 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 20:43:53.725732 systemd-networkd[873]: enP24869s1: Gained IPv6LL Feb 13 20:43:53.970375 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 20:43:54.178200 ignition[1065]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 20:43:54.192360 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 20:43:54.653633 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 20:43:54.840860 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 20:43:54.840860 ignition[1065]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 20:43:54.861744 ignition[1065]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(e): 
op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:43:54.874999 ignition[1065]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:43:54.874999 ignition[1065]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:43:54.874999 ignition[1065]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:43:54.874999 ignition[1065]: INFO : files: files passed Feb 13 20:43:54.874999 ignition[1065]: INFO : Ignition finished successfully Feb 13 20:43:54.888629 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:43:54.936275 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:43:54.946231 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:43:54.976296 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:43:55.070068 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:43:55.070068 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:43:54.976393 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:43:55.107514 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:43:54.985981 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:43:55.003149 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:43:55.032273 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:43:55.071919 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:43:55.073045 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:43:55.087000 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:43:55.101559 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:43:55.113933 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:43:55.117256 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:43:55.192664 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:43:55.219276 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:43:55.241657 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:43:55.241792 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:43:55.256076 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:43:55.269019 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Feb 13 20:43:55.283486 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:43:55.295903 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:43:55.295981 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:43:55.314654 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:43:55.327800 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:43:55.339747 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:43:55.354119 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:43:55.366945 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:43:55.379619 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:43:55.391625 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:43:55.404612 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:43:55.418690 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:43:55.430387 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:43:55.440337 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:43:55.440425 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:43:55.455936 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:43:55.462972 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:43:55.476891 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:43:55.480040 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:43:55.489500 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:43:55.489575 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:43:55.507946 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:43:55.508027 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:43:55.524229 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:43:55.524292 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:43:55.536747 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:43:55.536797 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:43:55.571247 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:43:55.603821 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:43:55.621393 ignition[1117]: INFO : Ignition 2.19.0 Feb 13 20:43:55.621393 ignition[1117]: INFO : Stage: umount Feb 13 20:43:55.621393 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:43:55.621393 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 20:43:55.621393 ignition[1117]: INFO : umount: umount passed Feb 13 20:43:55.621393 ignition[1117]: INFO : Ignition finished successfully Feb 13 20:43:55.614092 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:43:55.614172 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:43:55.630521 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 13 20:43:55.630587 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:43:55.644490 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:43:55.645000 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:43:55.647050 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:43:55.660866 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:43:55.660971 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:43:55.672904 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:43:55.672966 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:43:55.681208 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:43:55.681281 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:43:55.690262 systemd[1]: Stopped target network.target - Network. Feb 13 20:43:55.701397 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:43:55.701465 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:43:55.716693 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:43:55.728782 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:43:55.736060 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:43:55.749929 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:43:55.762218 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:43:55.773446 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:43:55.773504 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:43:55.784804 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:43:55.784862 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:43:55.796567 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:43:55.796634 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:43:55.808993 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:43:55.809054 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:43:55.821039 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:43:55.833367 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:43:55.846078 systemd-networkd[873]: eth0: DHCPv6 lease lost Feb 13 20:43:55.850955 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:43:55.851154 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:43:55.865460 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:43:56.102199 kernel: hv_netvsc 000d3ac2-b678-000d-3ac2-b678000d3ac2 eth0: Data path switched from VF: enP24869s1 Feb 13 20:43:55.865607 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:43:55.880337 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:43:55.880396 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:43:55.911238 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:43:55.923598 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:43:55.923682 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Feb 13 20:43:55.938897 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:43:55.938956 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:43:55.951408 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:43:55.951461 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:43:55.969195 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:43:55.969263 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:43:55.981961 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:43:55.998072 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:43:55.998564 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:43:56.018444 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:43:56.018584 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:43:56.033070 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:43:56.033154 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:43:56.047535 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:43:56.047581 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:43:56.058950 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:43:56.059125 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:43:56.080651 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:43:56.080725 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:43:56.102003 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:43:56.102101 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:43:56.117125 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:43:56.117198 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:43:56.140295 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:43:56.405686 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Feb 13 20:43:56.160791 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:43:56.160873 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:43:56.179250 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 20:43:56.179307 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:43:56.200634 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:43:56.200697 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:43:56.215849 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:43:56.215910 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:43:56.234666 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:43:56.234773 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:43:56.245687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:43:56.245774 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Feb 13 20:43:56.262578 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:43:56.294283 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:43:56.342815 systemd[1]: Switching root. Feb 13 20:43:56.512087 systemd-journald[217]: Journal stopped Feb 13 20:43:58.734189 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:43:58.734217 kernel: SELinux: policy capability open_perms=1 Feb 13 20:43:58.734227 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:43:58.734235 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:43:58.734245 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:43:58.734253 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:43:58.734262 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:43:58.734270 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:43:58.734277 kernel: audit: type=1403 audit(1739479437.149:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:43:58.734287 systemd[1]: Successfully loaded SELinux policy in 87.124ms. Feb 13 20:43:58.734300 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.066ms. Feb 13 20:43:58.734310 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:43:58.734319 systemd[1]: Detected virtualization microsoft. Feb 13 20:43:58.734328 systemd[1]: Detected architecture arm64. Feb 13 20:43:58.734337 systemd[1]: Detected first boot. Feb 13 20:43:58.734348 systemd[1]: Hostname set to <ci-4081.3.1-a-1c3e1e2868>. Feb 13 20:43:58.734357 systemd[1]: Initializing machine ID from random generator. Feb 13 20:43:58.734366 zram_generator::config[1175]: No configuration found. Feb 13 20:43:58.734375 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:43:58.734384 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:43:58.734393 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 20:43:58.734403 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:43:58.734413 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:43:58.734422 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:43:58.734432 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:43:58.734441 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:43:58.734450 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:43:58.734459 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:43:58.734468 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:43:58.734479 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:43:58.734488 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:43:58.734498 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:43:58.734507 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:43:58.734517 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:43:58.734526 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:43:58.734535 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:43:58.734544 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:43:58.734555 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:43:58.734564 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:43:58.734573 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:43:58.734585 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:43:58.734594 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:43:58.734604 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:43:58.734613 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:43:58.734622 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:43:58.734633 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:43:58.734643 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:43:58.734652 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:43:58.734662 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:43:58.734671 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:43:58.734683 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:43:58.734693 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:43:58.734703 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:43:58.734712 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:43:58.734722 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:43:58.734731 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:43:58.734741 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:43:58.734750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:43:58.734762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:43:58.734771 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:43:58.734781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:43:58.734790 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:43:58.734800 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:43:58.734810 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:43:58.734819 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:43:58.734829 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 13 20:43:58.734839 kernel: fuse: init (API version 7.39) Feb 13 20:43:58.734849 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 20:43:58.734859 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 20:43:58.734868 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:43:58.734877 kernel: ACPI: bus type drm_connector registered Feb 13 20:43:58.734886 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:43:58.734895 kernel: loop: module loaded Feb 13 20:43:58.734925 systemd-journald[1294]: Collecting audit messages is disabled. Feb 13 20:43:58.734949 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:43:58.734960 systemd-journald[1294]: Journal started Feb 13 20:43:58.734982 systemd-journald[1294]: Runtime Journal (/run/log/journal/a7e90ba0e3974cb8977695b75c0acf39) is 8.0M, max 78.5M, 70.5M free. Feb 13 20:43:58.764420 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:43:58.776035 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:43:58.794797 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:43:58.795970 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:43:58.801728 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:43:58.807986 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:43:58.813637 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:43:58.819883 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:43:58.826618 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:43:58.832304 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:43:58.839811 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:43:58.847224 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:43:58.847386 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:43:58.854605 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:43:58.854765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:43:58.861638 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:43:58.861786 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:43:58.868851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:43:58.869001 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:43:58.876182 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:43:58.876335 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:43:58.882870 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:43:58.883086 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:43:58.890062 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:43:58.896924 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Feb 13 20:43:58.904622 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:43:58.912748 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:43:58.928518 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:43:58.942093 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:43:58.950946 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:43:58.959655 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:43:58.976180 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:43:58.984367 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:43:58.991726 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:43:58.995206 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:43:59.004283 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:43:59.005467 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:43:59.020152 systemd-journald[1294]: Time spent on flushing to /var/log/journal/a7e90ba0e3974cb8977695b75c0acf39 is 31.598ms for 884 entries. Feb 13 20:43:59.020152 systemd-journald[1294]: System Journal (/var/log/journal/a7e90ba0e3974cb8977695b75c0acf39) is 8.0M, max 2.6G, 2.6G free. Feb 13 20:43:59.115855 systemd-journald[1294]: Received client request to flush runtime journal. Feb 13 20:43:59.016404 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:43:59.035478 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:43:59.044839 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:43:59.053044 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:43:59.064654 udevadm[1334]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:43:59.069801 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:43:59.079639 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:43:59.106398 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:43:59.119354 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Feb 13 20:43:59.119365 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Feb 13 20:43:59.119817 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:43:59.129327 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:43:59.143283 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:43:59.222468 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:43:59.234650 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:43:59.249892 systemd-tmpfiles[1353]: ACLs are not supported, ignoring. 
Feb 13 20:43:59.250236 systemd-tmpfiles[1353]: ACLs are not supported, ignoring. Feb 13 20:43:59.257666 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:43:59.753396 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:43:59.764247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:43:59.791918 systemd-udevd[1359]: Using default interface naming scheme 'v255'. Feb 13 20:43:59.901589 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:43:59.922207 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:43:59.968234 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:43:59.980471 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Feb 13 20:44:00.040302 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:44:00.062306 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:44:00.113303 systemd-networkd[1375]: lo: Link UP Feb 13 20:44:00.113316 systemd-networkd[1375]: lo: Gained carrier Feb 13 20:44:00.115724 systemd-networkd[1375]: Enumeration completed Feb 13 20:44:00.115866 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:44:00.122861 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:44:00.122875 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:44:00.138568 kernel: hv_vmbus: registering driver hyperv_fb Feb 13 20:44:00.138633 kernel: hv_vmbus: registering driver hv_balloon Feb 13 20:44:00.144440 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:44:00.157459 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 13 20:44:00.157531 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 13 20:44:00.157544 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 13 20:44:00.161811 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 13 20:44:00.170366 kernel: Console: switching to colour dummy device 80x25 Feb 13 20:44:00.179374 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 20:44:00.200037 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1364) Feb 13 20:44:00.238094 kernel: mlx5_core 6125:00:02.0 enP24869s1: Link up Feb 13 20:44:00.258540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:44:00.269069 kernel: hv_netvsc 000d3ac2-b678-000d-3ac2-b678000d3ac2 eth0: Data path switched to VF: enP24869s1 Feb 13 20:44:00.272363 systemd-networkd[1375]: enP24869s1: Link UP Feb 13 20:44:00.272452 systemd-networkd[1375]: eth0: Link UP Feb 13 20:44:00.272455 systemd-networkd[1375]: eth0: Gained carrier Feb 13 20:44:00.272471 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:44:00.276388 systemd-networkd[1375]: enP24869s1: Gained carrier Feb 13 20:44:00.282083 systemd-networkd[1375]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 20:44:00.325467 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Feb 13 20:44:00.342630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:44:00.342929 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:44:00.349978 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:44:00.363206 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:44:00.371420 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:44:00.393306 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:44:00.424139 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:44:00.436390 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:44:00.450368 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:44:00.462568 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:44:00.479941 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:44:00.492206 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:44:00.500288 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:44:00.507806 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:44:00.507841 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:44:00.516440 systemd[1]: Reached target machines.target - Containers. Feb 13 20:44:00.523136 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:44:00.536240 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:44:00.546233 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:44:00.552287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:44:00.553482 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:44:00.564231 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:44:00.574282 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:44:00.586133 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:44:00.610543 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:44:00.624039 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 20:44:00.642713 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:44:00.643646 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Feb 13 20:44:00.706031 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:44:00.727161 kernel: loop1: detected capacity change from 0 to 194096 Feb 13 20:44:00.767452 kernel: loop2: detected capacity change from 0 to 114328 Feb 13 20:44:00.851038 kernel: loop3: detected capacity change from 0 to 31320 Feb 13 20:44:00.948072 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 20:44:00.956032 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 20:44:00.965033 kernel: loop6: detected capacity change from 0 to 114328 Feb 13 20:44:00.976047 kernel: loop7: detected capacity change from 0 to 31320 Feb 13 20:44:00.984748 (sd-merge)[1479]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 20:44:00.985238 (sd-merge)[1479]: Merged extensions into '/usr'. Feb 13 20:44:00.990615 systemd[1]: Reloading requested from client PID 1466 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:44:00.990894 systemd[1]: Reloading... Feb 13 20:44:01.067048 zram_generator::config[1507]: No configuration found. Feb 13 20:44:01.196583 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:44:01.265674 systemd[1]: Reloading finished in 274 ms. Feb 13 20:44:01.283106 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:44:01.296373 systemd[1]: Starting ensure-sysext.service... Feb 13 20:44:01.305313 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:44:01.319280 systemd[1]: Reloading requested from client PID 1568 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:44:01.319297 systemd[1]: Reloading... Feb 13 20:44:01.338985 systemd-tmpfiles[1569]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:44:01.339693 systemd-tmpfiles[1569]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:44:01.340416 systemd-tmpfiles[1569]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:44:01.340666 systemd-tmpfiles[1569]: ACLs are not supported, ignoring. Feb 13 20:44:01.340719 systemd-tmpfiles[1569]: ACLs are not supported, ignoring. Feb 13 20:44:01.350139 systemd-tmpfiles[1569]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:44:01.350152 systemd-tmpfiles[1569]: Skipping /boot Feb 13 20:44:01.361718 systemd-tmpfiles[1569]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:44:01.361736 systemd-tmpfiles[1569]: Skipping /boot Feb 13 20:44:01.415044 zram_generator::config[1603]: No configuration found. Feb 13 20:44:01.536059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:44:01.606365 systemd[1]: Reloading finished in 286 ms. Feb 13 20:44:01.622299 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:44:01.642669 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:44:01.652214 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Feb 13 20:44:01.668209 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:44:01.681720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:44:01.701317 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:44:01.714456 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:44:01.737668 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:44:01.747549 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:44:01.750625 augenrules[1687]: No rules Feb 13 20:44:01.761495 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:44:01.781350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:44:01.801508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:44:01.809482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:44:01.810566 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:44:01.822157 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:44:01.822331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:44:01.830166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:44:01.830334 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:44:01.839458 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:44:01.847782 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:44:01.850948 systemd-resolved[1673]: Positive Trust Anchors: Feb 13 20:44:01.851332 systemd-resolved[1673]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:44:01.851368 systemd-resolved[1673]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:44:01.851692 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:44:01.855254 systemd-resolved[1673]: Using system hostname 'ci-4081.3.1-a-1c3e1e2868'. Feb 13 20:44:01.861069 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:44:01.871494 systemd[1]: Reached target network.target - Network. Feb 13 20:44:01.876876 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:44:01.883950 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:44:01.891563 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:44:01.901063 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 20:44:01.910478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:44:01.920448 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:44:01.920610 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:44:01.921553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:44:01.921741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:44:01.929525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:44:01.929688 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:44:01.938212 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:44:01.938431 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:44:01.950516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:44:01.956292 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:44:01.963756 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:44:01.973543 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:44:01.990589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:44:01.996502 ldconfig[1463]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:44:01.997634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:44:01.997841 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:44:02.004122 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:44:02.005279 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:44:02.005472 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:44:02.013377 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:44:02.013542 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:44:02.020363 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:44:02.027465 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:44:02.027633 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:44:02.035029 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:44:02.035229 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:44:02.045555 systemd[1]: Finished ensure-sysext.service. Feb 13 20:44:02.052608 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:44:02.052693 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:44:02.063240 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Feb 13 20:44:02.077427 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:44:02.083782 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:44:02.089577 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:44:02.096352 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:44:02.103229 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:44:02.108747 systemd-networkd[1375]: enP24869s1: Gained IPv6LL Feb 13 20:44:02.109394 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:44:02.116305 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:44:02.123542 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:44:02.123586 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:44:02.128624 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:44:02.134701 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:44:02.142267 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:44:02.148751 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:44:02.157203 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:44:02.164672 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:44:02.170111 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:44:02.175315 systemd[1]: System is tainted: cgroupsv1 Feb 13 20:44:02.175364 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:44:02.175383 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:44:02.177642 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 20:44:02.187190 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:44:02.205227 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:44:02.216221 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:44:02.227111 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:44:02.233661 systemd-networkd[1375]: eth0: Gained IPv6LL Feb 13 20:44:02.238580 (chronyd)[1739]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 20:44:02.242715 jq[1746]: false Feb 13 20:44:02.245249 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:44:02.253876 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:44:02.254546 chronyd[1751]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 20:44:02.253932 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 20:44:02.259259 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
Feb 13 20:44:02.265374 chronyd[1751]: Timezone right/UTC failed leap second check, ignoring Feb 13 20:44:02.265800 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 20:44:02.265607 chronyd[1751]: Loaded seccomp filter (level 2) Feb 13 20:44:02.270680 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:44:02.271890 KVP[1752]: KVP starting; pid is:1752 Feb 13 20:44:02.279140 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:44:02.288479 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:44:02.306711 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:44:02.312642 extend-filesystems[1748]: Found loop4 Feb 13 20:44:02.312642 extend-filesystems[1748]: Found loop5 Feb 13 20:44:02.332752 extend-filesystems[1748]: Found loop6 Feb 13 20:44:02.332752 extend-filesystems[1748]: Found loop7 Feb 13 20:44:02.332752 extend-filesystems[1748]: Found sda Feb 13 20:44:02.332752 extend-filesystems[1748]: Found sda1 Feb 13 20:44:02.332752 extend-filesystems[1748]: Found sda2 Feb 13 20:44:02.332752 extend-filesystems[1748]: Found sda3 Feb 13 20:44:02.332752 extend-filesystems[1748]: Found usr Feb 13 20:44:02.332752 extend-filesystems[1748]: Found sda4 Feb 13 20:44:02.332752 extend-filesystems[1748]: Found sda6 Feb 13 20:44:02.332752 extend-filesystems[1748]: Found sda7 Feb 13 20:44:02.332752 extend-filesystems[1748]: Found sda9 Feb 13 20:44:02.332752 extend-filesystems[1748]: Checking size of /dev/sda9 Feb 13 20:44:02.518609 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1363) Feb 13 20:44:02.324603 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:44:02.320550 dbus-daemon[1743]: [system] SELinux support is enabled Feb 13 20:44:02.519147 coreos-metadata[1741]: Feb 13 20:44:02.420 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 20:44:02.519147 coreos-metadata[1741]: Feb 13 20:44:02.444 INFO Fetch successful Feb 13 20:44:02.519147 coreos-metadata[1741]: Feb 13 20:44:02.444 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 20:44:02.519147 coreos-metadata[1741]: Feb 13 20:44:02.454 INFO Fetch successful Feb 13 20:44:02.519147 coreos-metadata[1741]: Feb 13 20:44:02.454 INFO Fetching http://168.63.129.16/machine/f5bb5647-6f33-471e-89a0-bc166e2b1856/f457331a%2D3678%2D42f0%2D8164%2D5013d7934ebc.%5Fci%2D4081.3.1%2Da%2D1c3e1e2868?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 20:44:02.519147 coreos-metadata[1741]: Feb 13 20:44:02.455 INFO Fetch successful Feb 13 20:44:02.519147 coreos-metadata[1741]: Feb 13 20:44:02.456 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 20:44:02.519147 coreos-metadata[1741]: Feb 13 20:44:02.471 INFO Fetch successful Feb 13 20:44:02.519396 extend-filesystems[1748]: Old size kept for /dev/sda9 Feb 13 20:44:02.519396 extend-filesystems[1748]: Found sr0 Feb 13 20:44:02.346682 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:44:02.351232 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:44:02.383341 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
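The coreos-metadata fetches interleaved above hit two distinct Azure endpoints: the WireServer at 168.63.129.16 for goal state, and the instance metadata service (IMDS) at 169.254.169.254 for VM properties such as vmSize. As a minimal sketch (written for illustration here, not taken from coreos-metadata), the vmSize query logged above reduces to a single HTTP GET; the only non-obvious requirement is the Metadata: true header that IMDS insists on:

# Reproduce the IMDS vmSize fetch shown in the coreos-metadata log above.
# The URL and api-version are copied verbatim from the log line; the
# Metadata header is mandatory for all Azure IMDS requests.
import urllib.request

def fetch_vm_size() -> str:
    req = urllib.request.Request(
        "http://169.254.169.254/metadata/instance/compute/vmSize"
        "?api-version=2017-08-01&format=text",
        headers={"Metadata": "true"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(fetch_vm_size())  # prints the VM size as plain text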
Feb 13 20:44:02.541609 update_engine[1776]: I20250213 20:44:02.413098 1776 main.cc:92] Flatcar Update Engine starting Feb 13 20:44:02.541609 update_engine[1776]: I20250213 20:44:02.444160 1776 update_check_scheduler.cc:74] Next update check in 4m58s Feb 13 20:44:02.396108 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:44:02.541953 jq[1781]: true Feb 13 20:44:02.405434 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:44:02.424742 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 20:44:02.440811 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:44:02.441097 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:44:02.441357 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:44:02.441549 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:44:02.471395 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:44:02.471628 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:44:02.511894 systemd-logind[1768]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 13 20:44:02.512122 systemd-logind[1768]: New seat seat0. Feb 13 20:44:02.513499 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:44:02.537384 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:44:02.537630 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:44:02.562726 kernel: hv_utils: KVP IC version 4.0 Feb 13 20:44:02.563063 KVP[1752]: KVP LIC Version: 3.1 Feb 13 20:44:02.590439 jq[1810]: true Feb 13 20:44:02.595376 (ntainerd)[1814]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:44:02.606740 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:44:02.634998 dbus-daemon[1743]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 20:44:02.645031 tar[1798]: linux-arm64/helm Feb 13 20:44:02.652055 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:44:02.664842 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:44:02.678234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:02.698417 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:44:02.703825 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:44:02.704065 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:44:02.704201 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:44:02.712243 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:44:02.712357 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:44:02.722141 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Feb 13 20:44:02.730426 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:44:02.752437 bash[1849]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:44:02.756133 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:44:02.773336 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:44:02.791695 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:44:02.956595 locksmithd[1852]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:44:03.014479 containerd[1814]: time="2025-02-13T20:44:03.014371560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:44:03.102853 containerd[1814]: time="2025-02-13T20:44:03.096976000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:44:03.106944 containerd[1814]: time="2025-02-13T20:44:03.106875000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:44:03.106944 containerd[1814]: time="2025-02-13T20:44:03.106936920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:44:03.107089 containerd[1814]: time="2025-02-13T20:44:03.106958680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:44:03.107763 containerd[1814]: time="2025-02-13T20:44:03.107144760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:44:03.107763 containerd[1814]: time="2025-02-13T20:44:03.107171680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:44:03.107763 containerd[1814]: time="2025-02-13T20:44:03.107234160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:44:03.107763 containerd[1814]: time="2025-02-13T20:44:03.107246440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:44:03.107763 containerd[1814]: time="2025-02-13T20:44:03.107457600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:44:03.107763 containerd[1814]: time="2025-02-13T20:44:03.107473800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:44:03.107763 containerd[1814]: time="2025-02-13T20:44:03.107487080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:44:03.107763 containerd[1814]: time="2025-02-13T20:44:03.107496680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:44:03.107763 containerd[1814]: time="2025-02-13T20:44:03.107561680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:44:03.107763 containerd[1814]: time="2025-02-13T20:44:03.107754520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:44:03.107959 containerd[1814]: time="2025-02-13T20:44:03.107878840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:44:03.107959 containerd[1814]: time="2025-02-13T20:44:03.107893560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:44:03.108000 containerd[1814]: time="2025-02-13T20:44:03.107961760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:44:03.108043 containerd[1814]: time="2025-02-13T20:44:03.107998040Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.124867400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.124952480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.124969760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.124985840Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.125001600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.125199640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.125524480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.125620240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.125636400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.125649920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.125683080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.125703600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.125716560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:44:03.126046 containerd[1814]: time="2025-02-13T20:44:03.125731400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125746160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125760280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125778480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125792040Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125811600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125826640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125839200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125853320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125865240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125877720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125891040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125910200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125923520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126391 containerd[1814]: time="2025-02-13T20:44:03.125938680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.125950160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.125961400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.125974680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.125991560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.126053280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.126071640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.126083160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.126137120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.126155360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.126166960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.126178680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.126188560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.126201760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:44:03.126674 containerd[1814]: time="2025-02-13T20:44:03.126213080Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:44:03.127975 containerd[1814]: time="2025-02-13T20:44:03.126223480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:44:03.128003 containerd[1814]: time="2025-02-13T20:44:03.126517560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:44:03.128003 containerd[1814]: time="2025-02-13T20:44:03.126579640Z" level=info msg="Connect containerd service" Feb 13 20:44:03.128003 containerd[1814]: time="2025-02-13T20:44:03.126619320Z" level=info msg="using legacy CRI server" Feb 13 20:44:03.128003 containerd[1814]: time="2025-02-13T20:44:03.126625960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:44:03.128003 containerd[1814]: time="2025-02-13T20:44:03.126718520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:44:03.133015 containerd[1814]: time="2025-02-13T20:44:03.131030960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 
20:44:03.133015 containerd[1814]: time="2025-02-13T20:44:03.131348320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:44:03.133015 containerd[1814]: time="2025-02-13T20:44:03.131383120Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:44:03.133015 containerd[1814]: time="2025-02-13T20:44:03.131435600Z" level=info msg="Start subscribing containerd event" Feb 13 20:44:03.133015 containerd[1814]: time="2025-02-13T20:44:03.131472760Z" level=info msg="Start recovering state" Feb 13 20:44:03.133015 containerd[1814]: time="2025-02-13T20:44:03.131536920Z" level=info msg="Start event monitor" Feb 13 20:44:03.133015 containerd[1814]: time="2025-02-13T20:44:03.131547480Z" level=info msg="Start snapshots syncer" Feb 13 20:44:03.133015 containerd[1814]: time="2025-02-13T20:44:03.131556560Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:44:03.133015 containerd[1814]: time="2025-02-13T20:44:03.131565640Z" level=info msg="Start streaming server" Feb 13 20:44:03.131743 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:44:03.138985 containerd[1814]: time="2025-02-13T20:44:03.138937160Z" level=info msg="containerd successfully booted in 0.128128s" Feb 13 20:44:03.210791 tar[1798]: linux-arm64/LICENSE Feb 13 20:44:03.210889 tar[1798]: linux-arm64/README.md Feb 13 20:44:03.230302 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:44:03.591198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:44:03.598427 (kubelet)[1891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:44:04.030503 sshd_keygen[1775]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:44:04.063623 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:44:04.074576 kubelet[1891]: E0213 20:44:04.074524 1891 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:44:04.078679 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:44:04.087535 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 20:44:04.095581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:44:04.095833 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:44:04.102815 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:44:04.103199 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:44:04.116661 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:44:04.130187 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 20:44:04.149074 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:44:04.162655 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:44:04.177687 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:44:04.187222 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:44:04.196406 systemd[1]: Reached target multi-user.target - Multi-User System. 
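The containerd entries above end with the daemon serving on /run/containerd/containerd.sock and reporting a successful boot. A quick liveness probe, sketched with the stock ctr CLI driven from Python (illustrative only; nothing in the boot flow above runs this), looks like:

# Probe the containerd socket logged above using the bundled ctr CLI.
import subprocess

out = subprocess.run(
    ["ctr", "--address", "/run/containerd/containerd.sock", "version"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)  # client and server versions, if the daemon is up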
Feb 13 20:44:04.206136 systemd[1]: Startup finished in 9.965s (kernel) + 7.142s (userspace) = 17.108s. Feb 13 20:44:04.298300 login[1927]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Feb 13 20:44:04.300435 login[1928]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:04.314103 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:44:04.315603 systemd-logind[1768]: New session 2 of user core. Feb 13 20:44:04.326211 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:44:04.340988 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:44:04.350572 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:44:04.357967 (systemd)[1937]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:44:04.483723 systemd[1937]: Queued start job for default target default.target. Feb 13 20:44:04.484123 systemd[1937]: Created slice app.slice - User Application Slice. Feb 13 20:44:04.484142 systemd[1937]: Reached target paths.target - Paths. Feb 13 20:44:04.484153 systemd[1937]: Reached target timers.target - Timers. Feb 13 20:44:04.493175 systemd[1937]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:44:04.501408 systemd[1937]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:44:04.501482 systemd[1937]: Reached target sockets.target - Sockets. Feb 13 20:44:04.501494 systemd[1937]: Reached target basic.target - Basic System. Feb 13 20:44:04.501543 systemd[1937]: Reached target default.target - Main User Target. Feb 13 20:44:04.501572 systemd[1937]: Startup finished in 137ms. Feb 13 20:44:04.501674 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:44:04.509458 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 20:44:04.820986 waagent[1923]: 2025-02-13T20:44:04.820889Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 20:44:04.827332 waagent[1923]: 2025-02-13T20:44:04.827248Z INFO Daemon Daemon OS: flatcar 4081.3.1 Feb 13 20:44:04.832265 waagent[1923]: 2025-02-13T20:44:04.832200Z INFO Daemon Daemon Python: 3.11.9 Feb 13 20:44:04.837168 waagent[1923]: 2025-02-13T20:44:04.837098Z INFO Daemon Daemon Run daemon Feb 13 20:44:04.841689 waagent[1923]: 2025-02-13T20:44:04.841626Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.1' Feb 13 20:44:04.851084 waagent[1923]: 2025-02-13T20:44:04.851001Z INFO Daemon Daemon Using waagent for provisioning Feb 13 20:44:04.857160 waagent[1923]: 2025-02-13T20:44:04.857103Z INFO Daemon Daemon Activate resource disk Feb 13 20:44:04.861783 waagent[1923]: 2025-02-13T20:44:04.861714Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 20:44:04.873148 waagent[1923]: 2025-02-13T20:44:04.872938Z INFO Daemon Daemon Found device: None Feb 13 20:44:04.877708 waagent[1923]: 2025-02-13T20:44:04.877640Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 20:44:04.886141 waagent[1923]: 2025-02-13T20:44:04.886070Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 20:44:04.899267 waagent[1923]: 2025-02-13T20:44:04.899196Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 20:44:04.905167 waagent[1923]: 2025-02-13T20:44:04.905106Z INFO Daemon Daemon Running default provisioning handler Feb 13 20:44:04.919716 waagent[1923]: 2025-02-13T20:44:04.919090Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 20:44:04.933144 waagent[1923]: 2025-02-13T20:44:04.933071Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 20:44:04.942767 waagent[1923]: 2025-02-13T20:44:04.942696Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 20:44:04.947902 waagent[1923]: 2025-02-13T20:44:04.947836Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 20:44:04.999035 waagent[1923]: 2025-02-13T20:44:04.996155Z INFO Daemon Daemon Successfully mounted dvd Feb 13 20:44:05.019603 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 20:44:05.022075 waagent[1923]: 2025-02-13T20:44:05.021382Z INFO Daemon Daemon Detect protocol endpoint Feb 13 20:44:05.026427 waagent[1923]: 2025-02-13T20:44:05.026352Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 20:44:05.032155 waagent[1923]: 2025-02-13T20:44:05.032093Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 13 20:44:05.038807 waagent[1923]: 2025-02-13T20:44:05.038742Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 20:44:05.044341 waagent[1923]: 2025-02-13T20:44:05.044284Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 20:44:05.049684 waagent[1923]: 2025-02-13T20:44:05.049625Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 20:44:05.070371 waagent[1923]: 2025-02-13T20:44:05.070324Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 20:44:05.077369 waagent[1923]: 2025-02-13T20:44:05.077296Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 20:44:05.082665 waagent[1923]: 2025-02-13T20:44:05.082606Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 20:44:05.160855 waagent[1923]: 2025-02-13T20:44:05.160741Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 20:44:05.167319 waagent[1923]: 2025-02-13T20:44:05.167248Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 20:44:05.176579 waagent[1923]: 2025-02-13T20:44:05.176524Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 20:44:05.199119 waagent[1923]: 2025-02-13T20:44:05.199065Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 20:44:05.205233 waagent[1923]: 2025-02-13T20:44:05.205182Z INFO Daemon Feb 13 20:44:05.208154 waagent[1923]: 2025-02-13T20:44:05.208095Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b3011e9c-e9be-4d7c-a94d-30566b0ea10f eTag: 106137321853764390 source: Fabric] Feb 13 20:44:05.222257 waagent[1923]: 2025-02-13T20:44:05.222208Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Feb 13 20:44:05.228916 waagent[1923]: 2025-02-13T20:44:05.228867Z INFO Daemon Feb 13 20:44:05.231620 waagent[1923]: 2025-02-13T20:44:05.231573Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 20:44:05.242428 waagent[1923]: 2025-02-13T20:44:05.242388Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 20:44:05.298705 login[1927]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:05.307074 systemd-logind[1768]: New session 1 of user core. Feb 13 20:44:05.313359 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:44:05.336830 waagent[1923]: 2025-02-13T20:44:05.336690Z INFO Daemon Downloaded certificate {'thumbprint': 'D8B0C5A237A7A11DDF6E2642B99245094CE8194E', 'hasPrivateKey': True} Feb 13 20:44:05.347618 waagent[1923]: 2025-02-13T20:44:05.347069Z INFO Daemon Downloaded certificate {'thumbprint': '1DACE0CD684C72107D37CDDC3AD8421CE9BFA741', 'hasPrivateKey': False} Feb 13 20:44:05.357301 waagent[1923]: 2025-02-13T20:44:05.357228Z INFO Daemon Fetch goal state completed Feb 13 20:44:05.369022 waagent[1923]: 2025-02-13T20:44:05.368923Z INFO Daemon Daemon Starting provisioning Feb 13 20:44:05.374082 waagent[1923]: 2025-02-13T20:44:05.373981Z INFO Daemon Daemon Handle ovf-env.xml. 
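The exchange above is the WireServer goal-state protocol: plain HTTP against 168.63.129.16, with the protocol version negotiated via /?comp=versions (the log settles on 2012-11-30). A rough sketch of the fetch, assuming the negotiated version travels in the x-ms-version request header, which is how the agent's protocol layer normally conveys it:

# Sketch of the goal-state fetch logged above. The endpoint and the
# comp=goalstate query come from the log; the x-ms-version header is an
# assumption based on how the negotiated wire version is usually sent.
import urllib.request

req = urllib.request.Request(
    "http://168.63.129.16/machine/?comp=goalstate",
    headers={"x-ms-version": "2012-11-30"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    print(resp.read().decode())  # XML goal state (incarnation, certs, ...)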
Feb 13 20:44:05.379392 waagent[1923]: 2025-02-13T20:44:05.379329Z INFO Daemon Daemon Set hostname [ci-4081.3.1-a-1c3e1e2868] Feb 13 20:44:05.392046 waagent[1923]: 2025-02-13T20:44:05.391728Z INFO Daemon Daemon Publish hostname [ci-4081.3.1-a-1c3e1e2868] Feb 13 20:44:05.399263 waagent[1923]: 2025-02-13T20:44:05.399187Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 20:44:05.405992 waagent[1923]: 2025-02-13T20:44:05.405924Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 20:44:05.430244 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:44:05.430854 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:44:05.430900 systemd-networkd[1375]: eth0: DHCP lease lost Feb 13 20:44:05.431394 waagent[1923]: 2025-02-13T20:44:05.431303Z INFO Daemon Daemon Create user account if not exists Feb 13 20:44:05.437191 waagent[1923]: 2025-02-13T20:44:05.436959Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 20:44:05.443001 waagent[1923]: 2025-02-13T20:44:05.442922Z INFO Daemon Daemon Configure sudoer Feb 13 20:44:05.443189 systemd-networkd[1375]: eth0: DHCPv6 lease lost Feb 13 20:44:05.447809 waagent[1923]: 2025-02-13T20:44:05.447730Z INFO Daemon Daemon Configure sshd Feb 13 20:44:05.452426 waagent[1923]: 2025-02-13T20:44:05.452354Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 20:44:05.465142 waagent[1923]: 2025-02-13T20:44:05.465052Z INFO Daemon Daemon Deploy ssh public key. Feb 13 20:44:05.474140 systemd-networkd[1375]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 20:44:06.553283 waagent[1923]: 2025-02-13T20:44:06.553233Z INFO Daemon Daemon Provisioning complete Feb 13 20:44:06.574579 waagent[1923]: 2025-02-13T20:44:06.574527Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 20:44:06.582116 waagent[1923]: 2025-02-13T20:44:06.582053Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Feb 13 20:44:06.592619 waagent[1923]: 2025-02-13T20:44:06.592556Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 20:44:06.734643 waagent[1996]: 2025-02-13T20:44:06.734546Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 20:44:06.734971 waagent[1996]: 2025-02-13T20:44:06.734717Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.1 Feb 13 20:44:06.734971 waagent[1996]: 2025-02-13T20:44:06.734770Z INFO ExtHandler ExtHandler Python: 3.11.9 Feb 13 20:44:06.746343 waagent[1996]: 2025-02-13T20:44:06.746241Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 20:44:06.746550 waagent[1996]: 2025-02-13T20:44:06.746506Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 20:44:06.746611 waagent[1996]: 2025-02-13T20:44:06.746583Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 20:44:06.756349 waagent[1996]: 2025-02-13T20:44:06.756258Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 20:44:06.762986 waagent[1996]: 2025-02-13T20:44:06.762940Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 20:44:06.763556 waagent[1996]: 2025-02-13T20:44:06.763511Z INFO ExtHandler Feb 13 20:44:06.763631 waagent[1996]: 2025-02-13T20:44:06.763601Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8ae6b4c4-53ad-4e60-9d63-67fcac861557 eTag: 106137321853764390 source: Fabric] Feb 13 20:44:06.763946 waagent[1996]: 2025-02-13T20:44:06.763907Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 13 20:44:06.764572 waagent[1996]: 2025-02-13T20:44:06.764528Z INFO ExtHandler Feb 13 20:44:06.764638 waagent[1996]: 2025-02-13T20:44:06.764610Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 20:44:06.769195 waagent[1996]: 2025-02-13T20:44:06.769152Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 20:44:06.849818 waagent[1996]: 2025-02-13T20:44:06.849662Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D8B0C5A237A7A11DDF6E2642B99245094CE8194E', 'hasPrivateKey': True} Feb 13 20:44:06.850221 waagent[1996]: 2025-02-13T20:44:06.850173Z INFO ExtHandler Downloaded certificate {'thumbprint': '1DACE0CD684C72107D37CDDC3AD8421CE9BFA741', 'hasPrivateKey': False} Feb 13 20:44:06.850648 waagent[1996]: 2025-02-13T20:44:06.850605Z INFO ExtHandler Fetch goal state completed Feb 13 20:44:06.868119 waagent[1996]: 2025-02-13T20:44:06.868045Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1996 Feb 13 20:44:06.868295 waagent[1996]: 2025-02-13T20:44:06.868259Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 20:44:06.870026 waagent[1996]: 2025-02-13T20:44:06.869966Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 20:44:06.870433 waagent[1996]: 2025-02-13T20:44:06.870392Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 20:44:06.884255 waagent[1996]: 2025-02-13T20:44:06.884206Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 20:44:06.884471 waagent[1996]: 2025-02-13T20:44:06.884431Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 20:44:06.891250 waagent[1996]: 2025-02-13T20:44:06.890686Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 13 20:44:06.897948 systemd[1]: Reloading requested from client PID 2011 ('systemctl') (unit waagent.service)... Feb 13 20:44:06.897960 systemd[1]: Reloading... Feb 13 20:44:06.979054 zram_generator::config[2046]: No configuration found. Feb 13 20:44:07.101028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:44:07.175426 systemd[1]: Reloading finished in 277 ms. Feb 13 20:44:07.196332 waagent[1996]: 2025-02-13T20:44:07.196184Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 20:44:07.203030 systemd[1]: Reloading requested from client PID 2104 ('systemctl') (unit waagent.service)... Feb 13 20:44:07.203052 systemd[1]: Reloading... Feb 13 20:44:07.301054 zram_generator::config[2141]: No configuration found. Feb 13 20:44:07.405019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:44:07.479057 systemd[1]: Reloading finished in 275 ms. Feb 13 20:44:07.503365 waagent[1996]: 2025-02-13T20:44:07.502506Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 20:44:07.503365 waagent[1996]: 2025-02-13T20:44:07.502692Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 20:44:07.626081 waagent[1996]: 2025-02-13T20:44:07.625961Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 20:44:07.626687 waagent[1996]: 2025-02-13T20:44:07.626631Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 20:44:07.627537 waagent[1996]: 2025-02-13T20:44:07.627448Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 20:44:07.628021 waagent[1996]: 2025-02-13T20:44:07.627895Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 20:44:07.628526 waagent[1996]: 2025-02-13T20:44:07.628416Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 20:44:07.628628 waagent[1996]: 2025-02-13T20:44:07.628517Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 20:44:07.629167 waagent[1996]: 2025-02-13T20:44:07.629052Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 20:44:07.629244 waagent[1996]: 2025-02-13T20:44:07.629166Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 13 20:44:07.629459 waagent[1996]: 2025-02-13T20:44:07.629329Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 20:44:07.630824 waagent[1996]: 2025-02-13T20:44:07.630104Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 20:44:07.630824 waagent[1996]: 2025-02-13T20:44:07.630219Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 20:44:07.630824 waagent[1996]: 2025-02-13T20:44:07.630370Z INFO EnvHandler ExtHandler Configure routes Feb 13 20:44:07.630824 waagent[1996]: 2025-02-13T20:44:07.630440Z INFO EnvHandler ExtHandler Gateway:None Feb 13 20:44:07.630824 waagent[1996]: 2025-02-13T20:44:07.630482Z INFO EnvHandler ExtHandler Routes:None Feb 13 20:44:07.631397 waagent[1996]: 2025-02-13T20:44:07.631342Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 20:44:07.633034 waagent[1996]: 2025-02-13T20:44:07.631503Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 20:44:07.637942 waagent[1996]: 2025-02-13T20:44:07.637880Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 13 20:44:07.640335 waagent[1996]: 2025-02-13T20:44:07.640279Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 20:44:07.640335 waagent[1996]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 20:44:07.640335 waagent[1996]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 20:44:07.640335 waagent[1996]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 20:44:07.640335 waagent[1996]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 20:44:07.640335 waagent[1996]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 20:44:07.640335 waagent[1996]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 20:44:07.642853 waagent[1996]: 2025-02-13T20:44:07.642789Z INFO ExtHandler ExtHandler Feb 13 20:44:07.642977 waagent[1996]: 2025-02-13T20:44:07.642935Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 20637568-19d3-43b3-8ca4-b4ee0b40cbee correlation 1595282c-d7d2-427a-9864-d31c3866b542 created: 2025-02-13T20:43:23.804337Z] Feb 13 20:44:07.643892 waagent[1996]: 2025-02-13T20:44:07.643463Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
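The MonitorHandler routing dump above is the raw /proc/net/route format: destination, gateway, and mask are 32-bit hex values in host (little-endian) byte order, so the gateway 0114C80A decodes to 10.200.20.1, and the all-zero destination marks the default route the agent uses to pick the primary interface. A short decoding sketch (an illustration, not agent code):

# Decode /proc/net/route as dumped above: little-endian hex -> dotted quad.
import socket
import struct

def hex_to_ip(h: str) -> str:
    # /proc/net/route stores addresses in host (little-endian) byte order
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

with open("/proc/net/route") as f:
    next(f)  # skip the header row
    for line in f:
        iface, dest, gateway, *_ = line.split()
        default = " (default route)" if dest == "00000000" else ""
        print(f"{iface}: {hex_to_ip(dest)} via {hex_to_ip(gateway)}{default}")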
Feb 13 20:44:07.644246 waagent[1996]: 2025-02-13T20:44:07.644195Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Feb 13 20:44:07.663452 waagent[1996]: 2025-02-13T20:44:07.663320Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 20:44:07.663452 waagent[1996]: Executing ['ip', '-a', '-o', 'link']: Feb 13 20:44:07.663452 waagent[1996]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 20:44:07.663452 waagent[1996]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c2:b6:78 brd ff:ff:ff:ff:ff:ff Feb 13 20:44:07.663452 waagent[1996]: 3: enP24869s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c2:b6:78 brd ff:ff:ff:ff:ff:ff\ altname enP24869p0s2 Feb 13 20:44:07.663452 waagent[1996]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 20:44:07.663452 waagent[1996]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 20:44:07.663452 waagent[1996]: 2: eth0 inet 10.200.20.21/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 20:44:07.663452 waagent[1996]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 20:44:07.663452 waagent[1996]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 20:44:07.663452 waagent[1996]: 2: eth0 inet6 fe80::20d:3aff:fec2:b678/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 20:44:07.663452 waagent[1996]: 3: enP24869s1 inet6 fe80::20d:3aff:fec2:b678/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 20:44:07.682261 waagent[1996]: 2025-02-13T20:44:07.682185Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 66B4BD62-C1B4-4C6E-B7E9-1572BCFB242B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 20:44:07.694285 waagent[1996]: 2025-02-13T20:44:07.694219Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 13 20:44:07.694285 waagent[1996]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:44:07.694285 waagent[1996]: pkts bytes target prot opt in out source destination Feb 13 20:44:07.694285 waagent[1996]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:44:07.694285 waagent[1996]: pkts bytes target prot opt in out source destination Feb 13 20:44:07.694285 waagent[1996]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:44:07.694285 waagent[1996]: pkts bytes target prot opt in out source destination Feb 13 20:44:07.694285 waagent[1996]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 20:44:07.694285 waagent[1996]: 7 511 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 20:44:07.694285 waagent[1996]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 20:44:07.698132 waagent[1996]: 2025-02-13T20:44:07.698066Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 20:44:07.698132 waagent[1996]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:44:07.698132 waagent[1996]: pkts bytes target prot opt in out source destination Feb 13 20:44:07.698132 waagent[1996]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:44:07.698132 waagent[1996]: pkts bytes target prot opt in out source destination Feb 13 20:44:07.698132 waagent[1996]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 20:44:07.698132 waagent[1996]: pkts bytes target prot opt in out source destination Feb 13 20:44:07.698132 waagent[1996]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 20:44:07.698132 waagent[1996]: 10 1045 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 20:44:07.698132 waagent[1996]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 20:44:07.698625 waagent[1996]: 2025-02-13T20:44:07.698473Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 20:44:14.346496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:44:14.355208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:14.453266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:44:14.458806 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:44:14.581288 kubelet[2241]: E0213 20:44:14.581228 2241 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:44:14.583880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:44:14.584048 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:44:24.604918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:44:24.610502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:24.870327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
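The EnvHandler rule set printed above boils down to three OUTPUT-chain rules scoped to the WireServer: accept TCP to port 53, accept TCP owned by UID 0 (root), and drop any other new or invalid connection. The same policy could be installed by hand roughly as follows (a sketch of equivalent iptables invocations, not the agent's actual code path):

# Recreate the three OUTPUT rules from the dump above with the iptables CLI.
import subprocess

WIRESERVER = "168.63.129.16"
RULES = [
    # allow DNS to the wireserver
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53",
     "-j", "ACCEPT"],
    # allow root-owned (UID 0) traffic, e.g. the agent itself
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # drop every other new/invalid connection to the wireserver
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(["iptables", "-w", *rule], check=True)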
Feb 13 20:44:24.871369 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:44:24.911152 kubelet[2262]: E0213 20:44:24.911093 2262 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:44:24.913309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:44:24.913448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:44:26.062401 chronyd[1751]: Selected source PHC0 Feb 13 20:44:30.804943 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:44:30.818352 systemd[1]: Started sshd@0-10.200.20.21:22-10.200.16.10:54134.service - OpenSSH per-connection server daemon (10.200.16.10:54134). Feb 13 20:44:31.273415 sshd[2271]: Accepted publickey for core from 10.200.16.10 port 54134 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:44:31.274704 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:31.278900 systemd-logind[1768]: New session 3 of user core. Feb 13 20:44:31.285397 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:44:31.677270 systemd[1]: Started sshd@1-10.200.20.21:22-10.200.16.10:54146.service - OpenSSH per-connection server daemon (10.200.16.10:54146). Feb 13 20:44:32.159488 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 54146 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:44:32.160854 sshd[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:32.164747 systemd-logind[1768]: New session 4 of user core. Feb 13 20:44:32.175267 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:44:32.523238 sshd[2276]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:32.527568 systemd-logind[1768]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:44:32.528134 systemd[1]: sshd@1-10.200.20.21:22-10.200.16.10:54146.service: Deactivated successfully. Feb 13 20:44:32.530329 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:44:32.531552 systemd-logind[1768]: Removed session 4. Feb 13 20:44:32.603374 systemd[1]: Started sshd@2-10.200.20.21:22-10.200.16.10:54152.service - OpenSSH per-connection server daemon (10.200.16.10:54152). Feb 13 20:44:33.042817 sshd[2284]: Accepted publickey for core from 10.200.16.10 port 54152 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:44:33.044228 sshd[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:33.048321 systemd-logind[1768]: New session 5 of user core. Feb 13 20:44:33.055357 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:44:33.364542 sshd[2284]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:33.367591 systemd-logind[1768]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:44:33.368695 systemd[1]: sshd@2-10.200.20.21:22-10.200.16.10:54152.service: Deactivated successfully. Feb 13 20:44:33.372451 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:44:33.373572 systemd-logind[1768]: Removed session 5. 
Feb 13 20:44:33.447278 systemd[1]: Started sshd@3-10.200.20.21:22-10.200.16.10:54158.service - OpenSSH per-connection server daemon (10.200.16.10:54158). Feb 13 20:44:33.890881 sshd[2292]: Accepted publickey for core from 10.200.16.10 port 54158 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:44:33.892250 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:33.897120 systemd-logind[1768]: New session 6 of user core. Feb 13 20:44:33.903340 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:44:34.216498 sshd[2292]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:34.220123 systemd[1]: sshd@3-10.200.20.21:22-10.200.16.10:54158.service: Deactivated successfully. Feb 13 20:44:34.223684 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:44:34.224465 systemd-logind[1768]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:44:34.225344 systemd-logind[1768]: Removed session 6. Feb 13 20:44:34.304429 systemd[1]: Started sshd@4-10.200.20.21:22-10.200.16.10:54162.service - OpenSSH per-connection server daemon (10.200.16.10:54162). Feb 13 20:44:34.793461 sshd[2300]: Accepted publickey for core from 10.200.16.10 port 54162 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:44:34.794800 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:34.798966 systemd-logind[1768]: New session 7 of user core. Feb 13 20:44:34.812269 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:44:35.101203 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:44:35.101475 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:44:35.102837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 20:44:35.112421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:35.114881 sudo[2304]: pam_unix(sudo:session): session closed for user root Feb 13 20:44:35.190461 sshd[2300]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:35.193664 systemd-logind[1768]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:44:35.194152 systemd[1]: sshd@4-10.200.20.21:22-10.200.16.10:54162.service: Deactivated successfully. Feb 13 20:44:35.197219 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:44:35.198814 systemd-logind[1768]: Removed session 7. Feb 13 20:44:35.270280 systemd[1]: Started sshd@5-10.200.20.21:22-10.200.16.10:54168.service - OpenSSH per-connection server daemon (10.200.16.10:54168). Feb 13 20:44:35.411247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:44:35.414522 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:44:35.456956 kubelet[2323]: E0213 20:44:35.456875 2323 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:44:35.459734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:44:35.459946 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
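Each kubelet crash in this loop has the same root cause, spelled out in the error text: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join, so the restart loop is expected until bootstrap completes, which is exactly what happens later in this log. For orientation, a minimal sketch of the kind of KubeletConfiguration that ends up there; the field values are assumptions chosen to be consistent with settings visible later in this journal (cgroupfs driver, /etc/kubernetes/pki/ca.crt, /etc/kubernetes/manifests), not a dump of the real file:

    # Illustrative /var/lib/kubelet/config.yaml (normally generated by kubeadm,
    # not written by hand); values below are assumptions.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
    EOF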
Feb 13 20:44:35.714926 sshd[2313]: Accepted publickey for core from 10.200.16.10 port 54168 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:44:35.716145 sshd[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:35.720589 systemd-logind[1768]: New session 8 of user core. Feb 13 20:44:35.727279 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:44:35.968531 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:44:35.969274 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:44:35.972799 sudo[2335]: pam_unix(sudo:session): session closed for user root Feb 13 20:44:35.977621 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:44:35.977873 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:44:35.996257 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:44:35.998115 auditctl[2338]: No rules Feb 13 20:44:35.998583 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:44:35.998824 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:44:36.002486 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:44:36.039324 augenrules[2357]: No rules Feb 13 20:44:36.041219 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:44:36.042239 sudo[2334]: pam_unix(sudo:session): session closed for user root Feb 13 20:44:36.117217 sshd[2313]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:36.119647 systemd[1]: sshd@5-10.200.20.21:22-10.200.16.10:54168.service: Deactivated successfully. Feb 13 20:44:36.123467 systemd-logind[1768]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:44:36.124166 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:44:36.124971 systemd-logind[1768]: Removed session 8. Feb 13 20:44:36.204282 systemd[1]: Started sshd@6-10.200.20.21:22-10.200.16.10:54178.service - OpenSSH per-connection server daemon (10.200.16.10:54178). Feb 13 20:44:36.688544 sshd[2366]: Accepted publickey for core from 10.200.16.10 port 54178 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:44:36.689845 sshd[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:36.694213 systemd-logind[1768]: New session 9 of user core. Feb 13 20:44:36.703271 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:44:36.962666 sudo[2370]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:44:36.963427 sudo[2370]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:44:37.305503 (dockerd)[2385]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:44:37.305508 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:44:37.558620 dockerd[2385]: time="2025-02-13T20:44:37.558493146Z" level=info msg="Starting up" Feb 13 20:44:37.699949 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2915038703-merged.mount: Deactivated successfully. 
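The sudo lines above record the audit-rules reset directly: the shipped rule fragments are deleted, audit-rules is restarted, and both auditctl and augenrules then report "No rules", which is the expected end state. The same steps by hand, taken from the logged commands (only auditctl -l is added here, for verification):

    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    systemctl restart audit-rules   # rebuilds the merged rule set via augenrules
    auditctl -l                     # should now print "No rules"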
Feb 13 20:44:37.788926 dockerd[2385]: time="2025-02-13T20:44:37.788860248Z" level=info msg="Loading containers: start." Feb 13 20:44:37.911034 kernel: Initializing XFRM netlink socket Feb 13 20:44:37.976913 systemd-networkd[1375]: docker0: Link UP Feb 13 20:44:38.000301 dockerd[2385]: time="2025-02-13T20:44:38.000263129Z" level=info msg="Loading containers: done." Feb 13 20:44:38.023945 dockerd[2385]: time="2025-02-13T20:44:38.023790396Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:44:38.024347 dockerd[2385]: time="2025-02-13T20:44:38.023994716Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:44:38.024347 dockerd[2385]: time="2025-02-13T20:44:38.024312317Z" level=info msg="Daemon has completed initialization" Feb 13 20:44:38.084060 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:44:38.084848 dockerd[2385]: time="2025-02-13T20:44:38.084779025Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:44:40.339499 containerd[1814]: time="2025-02-13T20:44:40.339447754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:44:42.222980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888449546.mount: Deactivated successfully. Feb 13 20:44:45.604794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 20:44:45.613191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:45.706211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:44:45.710935 (kubelet)[2560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:44:45.751154 kubelet[2560]: E0213 20:44:45.751078 2560 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:44:45.754285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:44:45.754443 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:44:48.109217 update_engine[1776]: I20250213 20:44:48.109151 1776 update_attempter.cc:509] Updating boot flags... Feb 13 20:44:49.700568 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Feb 13 20:44:49.741100 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2581) Feb 13 20:44:49.835179 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2582) Feb 13 20:44:52.257121 containerd[1814]: time="2025-02-13T20:44:52.256272698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:52.300674 containerd[1814]: time="2025-02-13T20:44:52.300615632Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207" Feb 13 20:44:52.305100 containerd[1814]: time="2025-02-13T20:44:52.305043037Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:52.364185 containerd[1814]: time="2025-02-13T20:44:52.363863429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:52.365297 containerd[1814]: time="2025-02-13T20:44:52.365100951Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 12.025607717s" Feb 13 20:44:52.365297 containerd[1814]: time="2025-02-13T20:44:52.365141151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 20:44:52.383874 containerd[1814]: time="2025-02-13T20:44:52.383835094Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:44:55.808052 containerd[1814]: time="2025-02-13T20:44:55.807729230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:55.853991 containerd[1814]: time="2025-02-13T20:44:55.853950687Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594" Feb 13 20:44:55.854835 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 20:44:55.860217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:44:55.914050 containerd[1814]: time="2025-02-13T20:44:55.913248839Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:55.955399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
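The kube-apiserver pull above took 12.0 s for roughly 30 MB, and the remaining control-plane images follow the same PullImage → ImageCreate → "Pulled image ... in <duration>" pattern. These pulls go through containerd's CRI plugin, so the same fetch can be reproduced or inspected by hand; a sketch, with the socket path an assumption for this host:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.30.10
    ctr --namespace k8s.io images ls | grep kube-apiserver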
Feb 13 20:44:55.957854 (kubelet)[2688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:44:55.994759 kubelet[2688]: E0213 20:44:55.994714 2688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:44:55.996976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:44:55.997134 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:44:58.463245 containerd[1814]: time="2025-02-13T20:44:58.463175477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:58.464676 containerd[1814]: time="2025-02-13T20:44:58.464512359Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 6.080633544s" Feb 13 20:44:58.464676 containerd[1814]: time="2025-02-13T20:44:58.464552759Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 20:44:58.488020 containerd[1814]: time="2025-02-13T20:44:58.487978868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:44:59.685081 containerd[1814]: time="2025-02-13T20:44:59.684045552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:59.686617 containerd[1814]: time="2025-02-13T20:44:59.686376035Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934" Feb 13 20:44:59.695046 containerd[1814]: time="2025-02-13T20:44:59.694484965Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:59.700975 containerd[1814]: time="2025-02-13T20:44:59.700915693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:44:59.702088 containerd[1814]: time="2025-02-13T20:44:59.701929654Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.213892466s" Feb 13 20:44:59.702088 containerd[1814]: time="2025-02-13T20:44:59.701969174Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference 
\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 20:44:59.721666 containerd[1814]: time="2025-02-13T20:44:59.721466078Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:45:00.806608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount901365986.mount: Deactivated successfully. Feb 13 20:45:01.420049 containerd[1814]: time="2025-02-13T20:45:01.419843186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:01.423900 containerd[1814]: time="2025-02-13T20:45:01.423679471Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 20:45:01.426612 containerd[1814]: time="2025-02-13T20:45:01.426570074Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:01.431273 containerd[1814]: time="2025-02-13T20:45:01.431222800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:01.432054 containerd[1814]: time="2025-02-13T20:45:01.431739321Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.710234683s" Feb 13 20:45:01.432054 containerd[1814]: time="2025-02-13T20:45:01.431773001Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 20:45:01.453035 containerd[1814]: time="2025-02-13T20:45:01.452885467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:45:02.161196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165351676.mount: Deactivated successfully. Feb 13 20:45:06.104716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 20:45:06.114285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:45:16.001230 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:45:16.011427 (kubelet)[2779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:45:16.050134 kubelet[2779]: E0213 20:45:16.050064 2779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:45:16.053210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:45:16.053370 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 20:45:18.979049 containerd[1814]: time="2025-02-13T20:45:18.978974746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:18.982977 containerd[1814]: time="2025-02-13T20:45:18.982721992Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 20:45:18.986639 containerd[1814]: time="2025-02-13T20:45:18.986591557Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:18.995038 containerd[1814]: time="2025-02-13T20:45:18.993230847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:18.996660 containerd[1814]: time="2025-02-13T20:45:18.996590732Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 17.543600985s" Feb 13 20:45:18.996660 containerd[1814]: time="2025-02-13T20:45:18.996651332Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 20:45:19.017141 containerd[1814]: time="2025-02-13T20:45:19.017106202Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:45:19.670198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751786129.mount: Deactivated successfully. 
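Note the version skew around the pause image: pause:3.9 is pulled here (most likely by kubeadm's image pre-pull), while the pod sandboxes created later in this log use pause:3.8, the default for the containerd v1.7.21 reported further down. The sandbox image is pinned in containerd's CRI configuration; a quick way to see which one this host will actually use (the commented value is an assumption matching containerd 1.7's default, not read from this machine):

    grep -n 'sandbox_image' /etc/containerd/config.toml
    # expected form: sandbox_image = "registry.k8s.io/pause:3.8"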
Feb 13 20:45:19.699056 containerd[1814]: time="2025-02-13T20:45:19.698977647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:19.702103 containerd[1814]: time="2025-02-13T20:45:19.702049332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 20:45:19.707373 containerd[1814]: time="2025-02-13T20:45:19.707320459Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:19.712370 containerd[1814]: time="2025-02-13T20:45:19.712315547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:19.713203 containerd[1814]: time="2025-02-13T20:45:19.713072268Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 695.780586ms" Feb 13 20:45:19.713203 containerd[1814]: time="2025-02-13T20:45:19.713108108Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 20:45:19.732455 containerd[1814]: time="2025-02-13T20:45:19.732339496Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:45:20.492266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1372925151.mount: Deactivated successfully. Feb 13 20:45:22.824999 containerd[1814]: time="2025-02-13T20:45:22.824948439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:22.828693 containerd[1814]: time="2025-02-13T20:45:22.828652564Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Feb 13 20:45:22.839994 containerd[1814]: time="2025-02-13T20:45:22.839959739Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:22.849715 containerd[1814]: time="2025-02-13T20:45:22.849661672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:22.850591 containerd[1814]: time="2025-02-13T20:45:22.850449313Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.118073577s" Feb 13 20:45:22.850591 containerd[1814]: time="2025-02-13T20:45:22.850483393Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 20:45:26.104754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
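With etcd:3.5.12-0 pulled, all control-plane images are now local, but kubelet is heading into restart number seven for the same missing-config reason. When triaging a loop like this, the journal shows the whole history in one place; for example:

    journalctl -u kubelet --no-pager -n 50
    systemctl status kubelet --no-pager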
Feb 13 20:45:26.110370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:45:26.399246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:45:26.408520 (kubelet)[2917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:45:26.454022 kubelet[2917]: E0213 20:45:26.452955 2917 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:45:26.457638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:45:26.457836 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:45:28.457712 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:45:28.469279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:45:28.499390 systemd[1]: Reloading requested from client PID 2933 ('systemctl') (unit session-9.scope)... Feb 13 20:45:28.499415 systemd[1]: Reloading... Feb 13 20:45:28.619099 zram_generator::config[2975]: No configuration found. Feb 13 20:45:28.728508 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:45:28.800505 systemd[1]: Reloading finished in 300 ms. Feb 13 20:45:28.850265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:45:28.853867 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:45:28.857279 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:45:28.857557 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:45:28.863271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:45:28.957925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:45:28.963331 (kubelet)[3055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:45:29.000986 kubelet[3055]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:45:29.000986 kubelet[3055]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:45:29.000986 kubelet[3055]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
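This attempt looks different from the previous seven: systemd now complains only about KUBELET_EXTRA_ARGS being unset (KUBELET_KUBEADM_ARGS no longer appears), and kubelet starts parsing deprecated flags such as --container-runtime-endpoint. Both observations point to kubeadm having written its flag file between restarts. A sketch of where those flags come from; the commented contents are an illustrative assumption, not read from this host:

    cat /var/lib/kubelet/kubeadm-flags.env
    # e.g. KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock
    #   --pod-infra-container-image=registry.k8s.io/pause:3.9 ..."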
Feb 13 20:45:29.001427 kubelet[3055]: I0213 20:45:29.000949 3055 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:45:29.694859 kubelet[3055]: I0213 20:45:29.694821 3055 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:45:29.694859 kubelet[3055]: I0213 20:45:29.694852 3055 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:45:29.695091 kubelet[3055]: I0213 20:45:29.695072 3055 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:45:29.709549 kubelet[3055]: I0213 20:45:29.709405 3055 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:45:29.709995 kubelet[3055]: E0213 20:45:29.709971 3055 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:29.717339 kubelet[3055]: I0213 20:45:29.717260 3055 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:45:29.717652 kubelet[3055]: I0213 20:45:29.717620 3055 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:45:29.717833 kubelet[3055]: I0213 20:45:29.717652 3055 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-1c3e1e2868","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:45:29.717908 kubelet[3055]: I0213 20:45:29.717844 3055 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:45:29.717908 kubelet[3055]: I0213 20:45:29.717853 3055 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:45:29.717992 kubelet[3055]: I0213 20:45:29.717973 3055 state_mem.go:36] "Initialized new in-memory 
state store" Feb 13 20:45:29.718796 kubelet[3055]: I0213 20:45:29.718772 3055 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:45:29.718836 kubelet[3055]: I0213 20:45:29.718803 3055 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:45:29.718836 kubelet[3055]: I0213 20:45:29.718836 3055 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:45:29.718877 kubelet[3055]: I0213 20:45:29.718853 3055 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:45:29.720671 kubelet[3055]: W0213 20:45:29.720319 3055 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:29.720671 kubelet[3055]: E0213 20:45:29.720369 3055 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:29.720671 kubelet[3055]: W0213 20:45:29.720602 3055 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-1c3e1e2868&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:29.720671 kubelet[3055]: E0213 20:45:29.720634 3055 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-1c3e1e2868&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:29.721105 kubelet[3055]: I0213 20:45:29.721000 3055 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:45:29.721208 kubelet[3055]: I0213 20:45:29.721192 3055 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:45:29.721245 kubelet[3055]: W0213 20:45:29.721240 3055 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 20:45:29.722070 kubelet[3055]: I0213 20:45:29.722045 3055 server.go:1264] "Started kubelet" Feb 13 20:45:29.725573 kubelet[3055]: E0213 20:45:29.725447 3055 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-1c3e1e2868.1823df6a8724a12a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-1c3e1e2868,UID:ci-4081.3.1-a-1c3e1e2868,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-1c3e1e2868,},FirstTimestamp:2025-02-13 20:45:29.721995562 +0000 UTC m=+0.755339329,LastTimestamp:2025-02-13 20:45:29.721995562 +0000 UTC m=+0.755339329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-1c3e1e2868,}" Feb 13 20:45:29.727330 kubelet[3055]: I0213 20:45:29.727312 3055 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:45:29.729769 kubelet[3055]: I0213 20:45:29.729405 3055 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:45:29.730693 kubelet[3055]: I0213 20:45:29.730274 3055 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:45:29.731136 kubelet[3055]: I0213 20:45:29.731078 3055 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:45:29.731315 kubelet[3055]: I0213 20:45:29.731294 3055 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:45:29.731737 kubelet[3055]: I0213 20:45:29.731709 3055 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:45:29.732420 kubelet[3055]: I0213 20:45:29.731851 3055 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:45:29.732815 kubelet[3055]: I0213 20:45:29.732785 3055 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:45:29.733320 kubelet[3055]: W0213 20:45:29.733265 3055 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:29.733320 kubelet[3055]: E0213 20:45:29.733320 3055 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:29.733474 kubelet[3055]: E0213 20:45:29.733450 3055 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:45:29.734583 kubelet[3055]: E0213 20:45:29.734535 3055 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-1c3e1e2868?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="200ms" Feb 13 20:45:29.735986 kubelet[3055]: I0213 20:45:29.735489 3055 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:45:29.739048 kubelet[3055]: I0213 20:45:29.737835 3055 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:45:29.739048 kubelet[3055]: I0213 20:45:29.737860 3055 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:45:29.753482 kubelet[3055]: I0213 20:45:29.753435 3055 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:45:29.754660 kubelet[3055]: I0213 20:45:29.754638 3055 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:45:29.754783 kubelet[3055]: I0213 20:45:29.754773 3055 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:45:29.754840 kubelet[3055]: I0213 20:45:29.754832 3055 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:45:29.754928 kubelet[3055]: E0213 20:45:29.754910 3055 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:45:29.761994 kubelet[3055]: W0213 20:45:29.761940 3055 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:29.762176 kubelet[3055]: E0213 20:45:29.762162 3055 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:29.801780 kubelet[3055]: I0213 20:45:29.801745 3055 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:45:29.801780 kubelet[3055]: I0213 20:45:29.801767 3055 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:45:29.801926 kubelet[3055]: I0213 20:45:29.801791 3055 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:45:29.807218 kubelet[3055]: I0213 20:45:29.807191 3055 policy_none.go:49] "None policy: Start" Feb 13 20:45:29.807875 kubelet[3055]: I0213 20:45:29.807825 3055 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:45:29.807875 kubelet[3055]: I0213 20:45:29.807855 3055 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:45:29.817618 kubelet[3055]: I0213 20:45:29.817581 3055 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:45:29.817822 kubelet[3055]: I0213 20:45:29.817779 3055 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:45:29.817897 kubelet[3055]: I0213 20:45:29.817883 3055 plugin_manager.go:118] 
"Starting Kubelet Plugin Manager" Feb 13 20:45:29.821858 kubelet[3055]: E0213 20:45:29.821831 3055 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-a-1c3e1e2868\" not found" Feb 13 20:45:29.833956 kubelet[3055]: I0213 20:45:29.833927 3055 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.834344 kubelet[3055]: E0213 20:45:29.834313 3055 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.855621 kubelet[3055]: I0213 20:45:29.855529 3055 topology_manager.go:215] "Topology Admit Handler" podUID="5c014f883b379dbb9d5c9cb60d8e5c8b" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.857159 kubelet[3055]: I0213 20:45:29.857124 3055 topology_manager.go:215] "Topology Admit Handler" podUID="7e0af46ebe810f3cb8cc616d6875752a" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.858818 kubelet[3055]: I0213 20:45:29.858707 3055 topology_manager.go:215] "Topology Admit Handler" podUID="37a9abfe70e92d5127bf8554437bcbfb" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.933416 kubelet[3055]: I0213 20:45:29.933186 3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c014f883b379dbb9d5c9cb60d8e5c8b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-1c3e1e2868\" (UID: \"5c014f883b379dbb9d5c9cb60d8e5c8b\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.933416 kubelet[3055]: I0213 20:45:29.933224 3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c014f883b379dbb9d5c9cb60d8e5c8b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-1c3e1e2868\" (UID: \"5c014f883b379dbb9d5c9cb60d8e5c8b\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.933416 kubelet[3055]: I0213 20:45:29.933244 3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e0af46ebe810f3cb8cc616d6875752a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-1c3e1e2868\" (UID: \"7e0af46ebe810f3cb8cc616d6875752a\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.933416 kubelet[3055]: I0213 20:45:29.933260 3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e0af46ebe810f3cb8cc616d6875752a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-1c3e1e2868\" (UID: \"7e0af46ebe810f3cb8cc616d6875752a\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.933416 kubelet[3055]: I0213 20:45:29.933277 3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e0af46ebe810f3cb8cc616d6875752a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-1c3e1e2868\" (UID: \"7e0af46ebe810f3cb8cc616d6875752a\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.933627 kubelet[3055]: I0213 20:45:29.933293 3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c014f883b379dbb9d5c9cb60d8e5c8b-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-1c3e1e2868\" (UID: \"5c014f883b379dbb9d5c9cb60d8e5c8b\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.933627 kubelet[3055]: I0213 20:45:29.933310 3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e0af46ebe810f3cb8cc616d6875752a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-1c3e1e2868\" (UID: \"7e0af46ebe810f3cb8cc616d6875752a\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.933627 kubelet[3055]: I0213 20:45:29.933325 3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e0af46ebe810f3cb8cc616d6875752a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-1c3e1e2868\" (UID: \"7e0af46ebe810f3cb8cc616d6875752a\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.933627 kubelet[3055]: I0213 20:45:29.933339 3055 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37a9abfe70e92d5127bf8554437bcbfb-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-1c3e1e2868\" (UID: \"37a9abfe70e92d5127bf8554437bcbfb\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:29.935210 kubelet[3055]: E0213 20:45:29.935175 3055 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-1c3e1e2868?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="400ms" Feb 13 20:45:30.035965 kubelet[3055]: I0213 20:45:30.035855 3055 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:30.036812 kubelet[3055]: E0213 20:45:30.036779 3055 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:30.163039 containerd[1814]: time="2025-02-13T20:45:30.162953671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-1c3e1e2868,Uid:5c014f883b379dbb9d5c9cb60d8e5c8b,Namespace:kube-system,Attempt:0,}" Feb 13 20:45:30.165536 containerd[1814]: time="2025-02-13T20:45:30.165502874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-1c3e1e2868,Uid:7e0af46ebe810f3cb8cc616d6875752a,Namespace:kube-system,Attempt:0,}" Feb 13 20:45:30.166098 containerd[1814]: time="2025-02-13T20:45:30.166064835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-1c3e1e2868,Uid:37a9abfe70e92d5127bf8554437bcbfb,Namespace:kube-system,Attempt:0,}" Feb 13 20:45:30.336495 kubelet[3055]: E0213 20:45:30.336373 3055 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-1c3e1e2868?timeout=10s\": 
dial tcp 10.200.20.21:6443: connect: connection refused" interval="800ms" Feb 13 20:45:30.439283 kubelet[3055]: I0213 20:45:30.439253 3055 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:30.439638 kubelet[3055]: E0213 20:45:30.439610 3055 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:30.848674 kubelet[3055]: W0213 20:45:30.848602 3055 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:30.848674 kubelet[3055]: E0213 20:45:30.848677 3055 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:30.887234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1238926675.mount: Deactivated successfully. Feb 13 20:45:30.935899 containerd[1814]: time="2025-02-13T20:45:30.935108858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:45:30.938799 containerd[1814]: time="2025-02-13T20:45:30.938716743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 20:45:30.944053 containerd[1814]: time="2025-02-13T20:45:30.943616350Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:45:30.948913 containerd[1814]: time="2025-02-13T20:45:30.948179476Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:45:30.950962 containerd[1814]: time="2025-02-13T20:45:30.950924919Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:45:30.954478 kubelet[3055]: W0213 20:45:30.954427 3055 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:30.954549 kubelet[3055]: E0213 20:45:30.954487 3055 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:30.958731 containerd[1814]: time="2025-02-13T20:45:30.957672208Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:45:30.961525 containerd[1814]: time="2025-02-13T20:45:30.961472813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:45:30.967563 
containerd[1814]: time="2025-02-13T20:45:30.967507141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:45:30.968906 containerd[1814]: time="2025-02-13T20:45:30.968338783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 802.211548ms" Feb 13 20:45:30.973220 containerd[1814]: time="2025-02-13T20:45:30.973173069Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 807.599475ms" Feb 13 20:45:30.983612 containerd[1814]: time="2025-02-13T20:45:30.983565083Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 820.507372ms" Feb 13 20:45:31.043770 kubelet[3055]: W0213 20:45:31.043441 3055 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-1c3e1e2868&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:31.043770 kubelet[3055]: E0213 20:45:31.043507 3055 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-1c3e1e2868&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:31.137888 kubelet[3055]: E0213 20:45:31.137842 3055 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-1c3e1e2868?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="1.6s" Feb 13 20:45:31.162624 kubelet[3055]: W0213 20:45:31.162530 3055 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:31.162624 kubelet[3055]: E0213 20:45:31.162582 3055 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Feb 13 20:45:31.175241 containerd[1814]: time="2025-02-13T20:45:31.174669817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:31.175241 containerd[1814]: time="2025-02-13T20:45:31.174733537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:31.175241 containerd[1814]: time="2025-02-13T20:45:31.174748097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:31.175241 containerd[1814]: time="2025-02-13T20:45:31.175061298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:31.181065 containerd[1814]: time="2025-02-13T20:45:31.180715265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:31.181065 containerd[1814]: time="2025-02-13T20:45:31.180820105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:31.181065 containerd[1814]: time="2025-02-13T20:45:31.180867225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:31.181065 containerd[1814]: time="2025-02-13T20:45:31.180878145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:31.181065 containerd[1814]: time="2025-02-13T20:45:31.180962946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:31.181276 containerd[1814]: time="2025-02-13T20:45:31.181036026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:31.183439 containerd[1814]: time="2025-02-13T20:45:31.181670307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:31.183439 containerd[1814]: time="2025-02-13T20:45:31.182110307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:31.242893 kubelet[3055]: I0213 20:45:31.242832 3055 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:31.243265 containerd[1814]: time="2025-02-13T20:45:31.243214548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-1c3e1e2868,Uid:7e0af46ebe810f3cb8cc616d6875752a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f647a146213bf3f04d2217cc706c3f009451d3d27f535fbae865f4ba9e1342e\"" Feb 13 20:45:31.243573 kubelet[3055]: E0213 20:45:31.243531 3055 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:31.251134 containerd[1814]: time="2025-02-13T20:45:31.251086679Z" level=info msg="CreateContainer within sandbox \"0f647a146213bf3f04d2217cc706c3f009451d3d27f535fbae865f4ba9e1342e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:45:31.253531 containerd[1814]: time="2025-02-13T20:45:31.253474322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-1c3e1e2868,Uid:5c014f883b379dbb9d5c9cb60d8e5c8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fc013c57becb872450adfe38d902dd3367f049e2f2b1a91cbd2596b71a1c713\"" Feb 13 20:45:31.257655 containerd[1814]: time="2025-02-13T20:45:31.257610728Z" level=info msg="CreateContainer within sandbox \"2fc013c57becb872450adfe38d902dd3367f049e2f2b1a91cbd2596b71a1c713\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:45:31.263114 containerd[1814]: time="2025-02-13T20:45:31.263066735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-1c3e1e2868,Uid:37a9abfe70e92d5127bf8554437bcbfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cc572589490c43b0d255299ca8dd62dcd75ce431d8a48bff78302b1b6e32e5d\"" Feb 13 20:45:31.268465 containerd[1814]: time="2025-02-13T20:45:31.268426662Z" level=info msg="CreateContainer within sandbox \"4cc572589490c43b0d255299ca8dd62dcd75ce431d8a48bff78302b1b6e32e5d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:45:31.317471 containerd[1814]: time="2025-02-13T20:45:31.317395807Z" level=info msg="CreateContainer within sandbox \"0f647a146213bf3f04d2217cc706c3f009451d3d27f535fbae865f4ba9e1342e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8decf083c1ca8cbfc4872aa0b5f1f71650210a91f64c9f49f47cb4cd330b7b90\"" Feb 13 20:45:31.318399 containerd[1814]: time="2025-02-13T20:45:31.318069128Z" level=info msg="StartContainer for \"8decf083c1ca8cbfc4872aa0b5f1f71650210a91f64c9f49f47cb4cd330b7b90\"" Feb 13 20:45:31.339606 containerd[1814]: time="2025-02-13T20:45:31.339339996Z" level=info msg="CreateContainer within sandbox \"2fc013c57becb872450adfe38d902dd3367f049e2f2b1a91cbd2596b71a1c713\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"560086fa21302fe9379fef4d3ebfcad30c9f9356b9e2a1ce9f159e165c8829c9\"" Feb 13 20:45:31.340821 containerd[1814]: time="2025-02-13T20:45:31.340645158Z" level=info msg="StartContainer for \"560086fa21302fe9379fef4d3ebfcad30c9f9356b9e2a1ce9f159e165c8829c9\"" Feb 13 20:45:31.344309 containerd[1814]: time="2025-02-13T20:45:31.344246803Z" level=info msg="CreateContainer within sandbox 
\"4cc572589490c43b0d255299ca8dd62dcd75ce431d8a48bff78302b1b6e32e5d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ba1e66d1132e8e0b19e5d6cbf2813358b96d58443b73069bec7f68bec60e602a\"" Feb 13 20:45:31.345566 containerd[1814]: time="2025-02-13T20:45:31.345523965Z" level=info msg="StartContainer for \"ba1e66d1132e8e0b19e5d6cbf2813358b96d58443b73069bec7f68bec60e602a\"" Feb 13 20:45:31.399533 containerd[1814]: time="2025-02-13T20:45:31.399385676Z" level=info msg="StartContainer for \"8decf083c1ca8cbfc4872aa0b5f1f71650210a91f64c9f49f47cb4cd330b7b90\" returns successfully" Feb 13 20:45:31.438279 containerd[1814]: time="2025-02-13T20:45:31.437615967Z" level=info msg="StartContainer for \"ba1e66d1132e8e0b19e5d6cbf2813358b96d58443b73069bec7f68bec60e602a\" returns successfully" Feb 13 20:45:31.471744 containerd[1814]: time="2025-02-13T20:45:31.471694613Z" level=info msg="StartContainer for \"560086fa21302fe9379fef4d3ebfcad30c9f9356b9e2a1ce9f159e165c8829c9\" returns successfully" Feb 13 20:45:32.847066 kubelet[3055]: I0213 20:45:32.846775 3055 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:33.722272 kubelet[3055]: I0213 20:45:33.721702 3055 apiserver.go:52] "Watching apiserver" Feb 13 20:45:33.818942 kubelet[3055]: I0213 20:45:33.816636 3055 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:33.832084 kubelet[3055]: I0213 20:45:33.831998 3055 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:45:33.913994 kubelet[3055]: E0213 20:45:33.913675 3055 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Feb 13 20:45:33.916082 kubelet[3055]: E0213 20:45:33.915401 3055 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.1-a-1c3e1e2868.1823df6a8724a12a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-1c3e1e2868,UID:ci-4081.3.1-a-1c3e1e2868,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-1c3e1e2868,},FirstTimestamp:2025-02-13 20:45:29.721995562 +0000 UTC m=+0.755339329,LastTimestamp:2025-02-13 20:45:29.721995562 +0000 UTC m=+0.755339329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-1c3e1e2868,}" Feb 13 20:45:34.004265 kubelet[3055]: E0213 20:45:34.003242 3055 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.1-a-1c3e1e2868.1823df6a87d34941 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-1c3e1e2868,UID:ci-4081.3.1-a-1c3e1e2868,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-1c3e1e2868,},FirstTimestamp:2025-02-13 20:45:29.733441857 +0000 UTC m=+0.766785624,LastTimestamp:2025-02-13 20:45:29.733441857 +0000 UTC m=+0.766785624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-1c3e1e2868,}" Feb 13 20:45:35.905226 
systemd[1]: Reloading requested from client PID 3327 ('systemctl') (unit session-9.scope)... Feb 13 20:45:35.905594 systemd[1]: Reloading... Feb 13 20:45:36.010139 zram_generator::config[3376]: No configuration found. Feb 13 20:45:36.107372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:45:36.186258 systemd[1]: Reloading finished in 280 ms. Feb 13 20:45:36.218244 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:45:36.233084 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:45:36.233413 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:45:36.240426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:45:36.334219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:45:36.340199 (kubelet)[3441]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:45:36.390576 kubelet[3441]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:45:36.390576 kubelet[3441]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:45:36.390576 kubelet[3441]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:45:36.390576 kubelet[3441]: I0213 20:45:36.390551 3441 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:45:36.395540 kubelet[3441]: I0213 20:45:36.395503 3441 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:45:36.395540 kubelet[3441]: I0213 20:45:36.395531 3441 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:45:36.395740 kubelet[3441]: I0213 20:45:36.395721 3441 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:45:36.397159 kubelet[3441]: I0213 20:45:36.397135 3441 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:45:36.398273 kubelet[3441]: I0213 20:45:36.398254 3441 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:45:36.407265 kubelet[3441]: I0213 20:45:36.407236 3441 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:45:36.407721 kubelet[3441]: I0213 20:45:36.407684 3441 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:45:36.407917 kubelet[3441]: I0213 20:45:36.407720 3441 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-1c3e1e2868","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:45:36.407990 kubelet[3441]: I0213 20:45:36.407925 3441 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:45:36.407990 kubelet[3441]: I0213 20:45:36.407934 3441 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:45:36.407990 kubelet[3441]: I0213 20:45:36.407964 3441 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:45:36.408109 kubelet[3441]: I0213 20:45:36.408092 3441 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:45:36.408153 kubelet[3441]: I0213 20:45:36.408122 3441 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:45:36.408153 kubelet[3441]: I0213 20:45:36.408152 3441 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:45:36.408197 kubelet[3441]: I0213 20:45:36.408165 3441 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:45:36.410198 kubelet[3441]: I0213 20:45:36.410133 3441 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:45:36.410526 kubelet[3441]: I0213 20:45:36.410502 3441 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:45:36.411159 kubelet[3441]: I0213 20:45:36.411107 3441 server.go:1264] "Started kubelet" Feb 13 20:45:36.413632 kubelet[3441]: I0213 20:45:36.413604 3441 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:45:36.418227 kubelet[3441]: I0213 20:45:36.418194 3441 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:45:36.419263 kubelet[3441]: I0213 20:45:36.419214 3441 server.go:455] "Adding 
debug handlers to kubelet server" Feb 13 20:45:36.420873 kubelet[3441]: I0213 20:45:36.420183 3441 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:45:36.420873 kubelet[3441]: I0213 20:45:36.420368 3441 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:45:36.421905 kubelet[3441]: I0213 20:45:36.421888 3441 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:45:36.423868 kubelet[3441]: I0213 20:45:36.423851 3441 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:45:36.424122 kubelet[3441]: I0213 20:45:36.424111 3441 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:45:36.430787 kubelet[3441]: I0213 20:45:36.430594 3441 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:45:36.435217 kubelet[3441]: I0213 20:45:36.435185 3441 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:45:36.435376 kubelet[3441]: I0213 20:45:36.435366 3441 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:45:36.435445 kubelet[3441]: I0213 20:45:36.435437 3441 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:45:36.435549 kubelet[3441]: E0213 20:45:36.435530 3441 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:45:36.440296 kubelet[3441]: I0213 20:45:36.440193 3441 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:45:36.443078 kubelet[3441]: I0213 20:45:36.442567 3441 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:45:36.472054 kubelet[3441]: I0213 20:45:36.471349 3441 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:45:36.526491 kubelet[3441]: I0213 20:45:36.526452 3441 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.715979 kubelet[3441]: E0213 20:45:36.535629 3441 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:45:36.715979 kubelet[3441]: I0213 20:45:36.538135 3441 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:45:36.715979 kubelet[3441]: I0213 20:45:36.538153 3441 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:45:36.715979 kubelet[3441]: I0213 20:45:36.538176 3441 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:45:36.715979 kubelet[3441]: I0213 20:45:36.552231 3441 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.715979 kubelet[3441]: I0213 20:45:36.714632 3441 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:45:36.715979 kubelet[3441]: I0213 20:45:36.714655 3441 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:45:36.715979 kubelet[3441]: I0213 20:45:36.714676 3441 policy_none.go:49] "None policy: Start" Feb 13 20:45:36.715979 kubelet[3441]: I0213 20:45:36.715500 3441 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.717611 kubelet[3441]: I0213 20:45:36.717587 3441 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 
20:45:36.717686 kubelet[3441]: I0213 20:45:36.717619 3441 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:45:36.717837 kubelet[3441]: I0213 20:45:36.717817 3441 state_mem.go:75] "Updated machine memory state" Feb 13 20:45:36.720383 kubelet[3441]: I0213 20:45:36.720336 3441 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:45:36.722306 kubelet[3441]: I0213 20:45:36.721783 3441 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:45:36.722306 kubelet[3441]: I0213 20:45:36.721905 3441 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:45:36.735902 kubelet[3441]: I0213 20:45:36.735865 3441 topology_manager.go:215] "Topology Admit Handler" podUID="5c014f883b379dbb9d5c9cb60d8e5c8b" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.736608 kubelet[3441]: I0213 20:45:36.736180 3441 topology_manager.go:215] "Topology Admit Handler" podUID="7e0af46ebe810f3cb8cc616d6875752a" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.736608 kubelet[3441]: I0213 20:45:36.736238 3441 topology_manager.go:215] "Topology Admit Handler" podUID="37a9abfe70e92d5127bf8554437bcbfb" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.751120 kubelet[3441]: W0213 20:45:36.751046 3441 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:45:36.769592 kubelet[3441]: W0213 20:45:36.769384 3441 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:45:36.769592 kubelet[3441]: W0213 20:45:36.769443 3441 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:45:36.826980 kubelet[3441]: I0213 20:45:36.826929 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c014f883b379dbb9d5c9cb60d8e5c8b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-1c3e1e2868\" (UID: \"5c014f883b379dbb9d5c9cb60d8e5c8b\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.826980 kubelet[3441]: I0213 20:45:36.826970 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e0af46ebe810f3cb8cc616d6875752a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-1c3e1e2868\" (UID: \"7e0af46ebe810f3cb8cc616d6875752a\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.827241 kubelet[3441]: I0213 20:45:36.826992 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e0af46ebe810f3cb8cc616d6875752a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-1c3e1e2868\" (UID: \"7e0af46ebe810f3cb8cc616d6875752a\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.827241 kubelet[3441]: I0213 20:45:36.827030 3441 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e0af46ebe810f3cb8cc616d6875752a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-1c3e1e2868\" (UID: \"7e0af46ebe810f3cb8cc616d6875752a\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.827241 kubelet[3441]: I0213 20:45:36.827056 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e0af46ebe810f3cb8cc616d6875752a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-1c3e1e2868\" (UID: \"7e0af46ebe810f3cb8cc616d6875752a\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.827241 kubelet[3441]: I0213 20:45:36.827072 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37a9abfe70e92d5127bf8554437bcbfb-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-1c3e1e2868\" (UID: \"37a9abfe70e92d5127bf8554437bcbfb\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.827241 kubelet[3441]: I0213 20:45:36.827090 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c014f883b379dbb9d5c9cb60d8e5c8b-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-1c3e1e2868\" (UID: \"5c014f883b379dbb9d5c9cb60d8e5c8b\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.827350 kubelet[3441]: I0213 20:45:36.827105 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c014f883b379dbb9d5c9cb60d8e5c8b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-1c3e1e2868\" (UID: \"5c014f883b379dbb9d5c9cb60d8e5c8b\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:36.827350 kubelet[3441]: I0213 20:45:36.827121 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e0af46ebe810f3cb8cc616d6875752a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-1c3e1e2868\" (UID: \"7e0af46ebe810f3cb8cc616d6875752a\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:37.408637 kubelet[3441]: I0213 20:45:37.408536 3441 apiserver.go:52] "Watching apiserver" Feb 13 20:45:37.424606 kubelet[3441]: I0213 20:45:37.424565 3441 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:45:37.534641 kubelet[3441]: W0213 20:45:37.534602 3441 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:45:37.535431 kubelet[3441]: E0213 20:45:37.534846 3441 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.1-a-1c3e1e2868\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.1-a-1c3e1e2868" Feb 13 20:45:37.569631 kubelet[3441]: I0213 20:45:37.569075 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-a-1c3e1e2868" podStartSLOduration=1.569057728 podStartE2EDuration="1.569057728s" podCreationTimestamp="2025-02-13 20:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:37.54776362 +0000 UTC m=+1.204426564" watchObservedRunningTime="2025-02-13 20:45:37.569057728 +0000 UTC m=+1.225720672" Feb 13 20:45:37.587599 kubelet[3441]: I0213 20:45:37.587540 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-a-1c3e1e2868" podStartSLOduration=1.5875213129999999 podStartE2EDuration="1.587521313s" podCreationTimestamp="2025-02-13 20:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:37.569269169 +0000 UTC m=+1.225932113" watchObservedRunningTime="2025-02-13 20:45:37.587521313 +0000 UTC m=+1.244184257" Feb 13 20:45:37.621311 kubelet[3441]: I0213 20:45:37.621222 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-1c3e1e2868" podStartSLOduration=1.621206758 podStartE2EDuration="1.621206758s" podCreationTimestamp="2025-02-13 20:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:37.588906395 +0000 UTC m=+1.245569339" watchObservedRunningTime="2025-02-13 20:45:37.621206758 +0000 UTC m=+1.277869702" Feb 13 20:45:41.470614 sudo[2370]: pam_unix(sudo:session): session closed for user root Feb 13 20:45:41.542472 sshd[2366]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:41.545841 systemd-logind[1768]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:45:41.546469 systemd[1]: sshd@6-10.200.20.21:22-10.200.16.10:54178.service: Deactivated successfully. Feb 13 20:45:41.551384 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:45:41.553667 systemd-logind[1768]: Removed session 9. Feb 13 20:45:51.071597 kubelet[3441]: I0213 20:45:51.071493 3441 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:45:51.073065 kubelet[3441]: I0213 20:45:51.072796 3441 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:45:51.073104 containerd[1814]: time="2025-02-13T20:45:51.072310196Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 20:45:52.109954 kubelet[3441]: I0213 20:45:52.108293 3441 topology_manager.go:215] "Topology Admit Handler" podUID="5df3ded6-ac5c-4c4a-afbb-a3fe376f551c" podNamespace="kube-system" podName="kube-proxy-hqfdt" Feb 13 20:45:52.235411 kubelet[3441]: I0213 20:45:52.235363 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5df3ded6-ac5c-4c4a-afbb-a3fe376f551c-kube-proxy\") pod \"kube-proxy-hqfdt\" (UID: \"5df3ded6-ac5c-4c4a-afbb-a3fe376f551c\") " pod="kube-system/kube-proxy-hqfdt" Feb 13 20:45:52.235411 kubelet[3441]: I0213 20:45:52.235413 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5df3ded6-ac5c-4c4a-afbb-a3fe376f551c-xtables-lock\") pod \"kube-proxy-hqfdt\" (UID: \"5df3ded6-ac5c-4c4a-afbb-a3fe376f551c\") " pod="kube-system/kube-proxy-hqfdt" Feb 13 20:45:52.235584 kubelet[3441]: I0213 20:45:52.235430 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5df3ded6-ac5c-4c4a-afbb-a3fe376f551c-lib-modules\") pod \"kube-proxy-hqfdt\" (UID: \"5df3ded6-ac5c-4c4a-afbb-a3fe376f551c\") " pod="kube-system/kube-proxy-hqfdt" Feb 13 20:45:52.235584 kubelet[3441]: I0213 20:45:52.235446 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpq48\" (UniqueName: \"kubernetes.io/projected/5df3ded6-ac5c-4c4a-afbb-a3fe376f551c-kube-api-access-zpq48\") pod \"kube-proxy-hqfdt\" (UID: \"5df3ded6-ac5c-4c4a-afbb-a3fe376f551c\") " pod="kube-system/kube-proxy-hqfdt" Feb 13 20:45:52.241903 kubelet[3441]: I0213 20:45:52.241248 3441 topology_manager.go:215] "Topology Admit Handler" podUID="417f9075-a6ca-4e15-a729-e1884804f341" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-4cmt7" Feb 13 20:45:52.415063 containerd[1814]: time="2025-02-13T20:45:52.414618685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hqfdt,Uid:5df3ded6-ac5c-4c4a-afbb-a3fe376f551c,Namespace:kube-system,Attempt:0,}" Feb 13 20:45:52.436065 kubelet[3441]: I0213 20:45:52.435948 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/417f9075-a6ca-4e15-a729-e1884804f341-var-lib-calico\") pod \"tigera-operator-7bc55997bb-4cmt7\" (UID: \"417f9075-a6ca-4e15-a729-e1884804f341\") " pod="tigera-operator/tigera-operator-7bc55997bb-4cmt7" Feb 13 20:45:52.436065 kubelet[3441]: I0213 20:45:52.435989 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rm2v\" (UniqueName: \"kubernetes.io/projected/417f9075-a6ca-4e15-a729-e1884804f341-kube-api-access-5rm2v\") pod \"tigera-operator-7bc55997bb-4cmt7\" (UID: \"417f9075-a6ca-4e15-a729-e1884804f341\") " pod="tigera-operator/tigera-operator-7bc55997bb-4cmt7" Feb 13 20:45:52.467513 containerd[1814]: time="2025-02-13T20:45:52.467230035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:52.467513 containerd[1814]: time="2025-02-13T20:45:52.467293755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
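
Each VerifyControllerAttachedVolume line for kube-proxy above maps to one entry in the pod's volume list: a ConfigMap volume, two hostPath volumes, and a generated projected token volume (kube-api-access-zpq48). Expressed with client-go types below; the hostPath locations are the conventional kube-proxy ones, assumed here rather than taken from this log:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // kubeProxyVolumes sketches the three explicitly declared volumes; the
    // kube-api-access-* projected token volume is injected automatically.
    func kubeProxyVolumes() []corev1.Volume {
    	return []corev1.Volume{
    		{Name: "kube-proxy", VolumeSource: corev1.VolumeSource{
    			ConfigMap: &corev1.ConfigMapVolumeSource{
    				LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"}}}},
    		{Name: "xtables-lock", VolumeSource: corev1.VolumeSource{
    			HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock"}}}, // assumed path
    		{Name: "lib-modules", VolumeSource: corev1.VolumeSource{
    			HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"}}}, // assumed path
    	}
    }

    func main() {
    	for _, v := range kubeProxyVolumes() {
    		fmt.Println(v.Name)
    	}
    }
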
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:52.467513 containerd[1814]: time="2025-02-13T20:45:52.467323195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:52.467513 containerd[1814]: time="2025-02-13T20:45:52.467413075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:52.506447 containerd[1814]: time="2025-02-13T20:45:52.506405166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hqfdt,Uid:5df3ded6-ac5c-4c4a-afbb-a3fe376f551c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4659fdd23f5c0dd6c0c9f3ef4cea7273e7b14a9eedde7d766c75ac469dc84007\"" Feb 13 20:45:52.509895 containerd[1814]: time="2025-02-13T20:45:52.509856651Z" level=info msg="CreateContainer within sandbox \"4659fdd23f5c0dd6c0c9f3ef4cea7273e7b14a9eedde7d766c75ac469dc84007\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:45:52.558879 containerd[1814]: time="2025-02-13T20:45:52.558830795Z" level=info msg="CreateContainer within sandbox \"4659fdd23f5c0dd6c0c9f3ef4cea7273e7b14a9eedde7d766c75ac469dc84007\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"51a4e39aff2a72f593ccd1b700bbbbd147ecd9c260c52ef45939463539165a5e\"" Feb 13 20:45:52.560249 containerd[1814]: time="2025-02-13T20:45:52.560196717Z" level=info msg="StartContainer for \"51a4e39aff2a72f593ccd1b700bbbbd147ecd9c260c52ef45939463539165a5e\"" Feb 13 20:45:52.613693 containerd[1814]: time="2025-02-13T20:45:52.613634588Z" level=info msg="StartContainer for \"51a4e39aff2a72f593ccd1b700bbbbd147ecd9c260c52ef45939463539165a5e\" returns successfully" Feb 13 20:45:52.846698 containerd[1814]: time="2025-02-13T20:45:52.846312014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-4cmt7,Uid:417f9075-a6ca-4e15-a729-e1884804f341,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:45:52.895214 containerd[1814]: time="2025-02-13T20:45:52.894571678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:52.895806 containerd[1814]: time="2025-02-13T20:45:52.895589159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:52.895806 containerd[1814]: time="2025-02-13T20:45:52.895609959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:52.895806 containerd[1814]: time="2025-02-13T20:45:52.895718040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
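
The kube-proxy lines above walk the standard CRI container lifecycle: RunPodSandbox returns a sandbox id, CreateContainer runs inside it, and StartContainer reports success. Compressed into the three cri-api calls below, reusing a RuntimeServiceClient dialed as in the earlier sketch; metadata and image are placeholders:

    package crisketch

    import (
    	"context"

    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // runDemoPod sketches the RunPodSandbox -> CreateContainer -> StartContainer
    // sequence visible in the log above.
    func runDemoPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
    	sandboxCfg := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{Name: "demo", Namespace: "default", Uid: "demo-uid"},
    	}
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		return err // on success: "RunPodSandbox ... returns sandbox id"
    	}
    	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "demo"},
    			Image:    &runtimeapi.ImageSpec{Image: "docker.io/library/busybox:latest"},
    		},
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		return err // on success: "CreateContainer within sandbox ... returns container id"
    	}
    	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
    	return err // on success: "StartContainer ... returns successfully"
    }
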
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:52.947646 containerd[1814]: time="2025-02-13T20:45:52.947604108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-4cmt7,Uid:417f9075-a6ca-4e15-a729-e1884804f341,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"32dc38366f7d46de960c50a4a67a00a4b81a82fbff08c918f36b8382106bdda8\"" Feb 13 20:45:52.950049 containerd[1814]: time="2025-02-13T20:45:52.949706871Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:45:53.556112 kubelet[3441]: I0213 20:45:53.555790 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hqfdt" podStartSLOduration=1.555771069 podStartE2EDuration="1.555771069s" podCreationTimestamp="2025-02-13 20:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:53.555691789 +0000 UTC m=+17.212354733" watchObservedRunningTime="2025-02-13 20:45:53.555771069 +0000 UTC m=+17.212434013" Feb 13 20:45:55.022598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784977064.mount: Deactivated successfully. Feb 13 20:45:55.446865 containerd[1814]: time="2025-02-13T20:45:55.446811051Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:55.449326 containerd[1814]: time="2025-02-13T20:45:55.449285654Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Feb 13 20:45:55.453159 containerd[1814]: time="2025-02-13T20:45:55.453091379Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:55.461485 containerd[1814]: time="2025-02-13T20:45:55.461403870Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:45:55.462487 containerd[1814]: time="2025-02-13T20:45:55.462356111Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.5126072s" Feb 13 20:45:55.462487 containerd[1814]: time="2025-02-13T20:45:55.462393871Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Feb 13 20:45:55.465651 containerd[1814]: time="2025-02-13T20:45:55.465486955Z" level=info msg="CreateContainer within sandbox \"32dc38366f7d46de960c50a4a67a00a4b81a82fbff08c918f36b8382106bdda8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:45:55.510341 containerd[1814]: time="2025-02-13T20:45:55.510287773Z" level=info msg="CreateContainer within sandbox \"32dc38366f7d46de960c50a4a67a00a4b81a82fbff08c918f36b8382106bdda8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ab37a316b38ba400ea3718f5a9f28e95b95ede43aa004b80e9aaf8060099a51f\"" Feb 13 20:45:55.511069 containerd[1814]: time="2025-02-13T20:45:55.511038334Z" level=info msg="StartContainer for 
\"ab37a316b38ba400ea3718f5a9f28e95b95ede43aa004b80e9aaf8060099a51f\"" Feb 13 20:45:55.570353 containerd[1814]: time="2025-02-13T20:45:55.570304651Z" level=info msg="StartContainer for \"ab37a316b38ba400ea3718f5a9f28e95b95ede43aa004b80e9aaf8060099a51f\" returns successfully" Feb 13 20:45:56.565661 kubelet[3441]: I0213 20:45:56.565509 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-4cmt7" podStartSLOduration=2.050758935 podStartE2EDuration="4.565494298s" podCreationTimestamp="2025-02-13 20:45:52 +0000 UTC" firstStartedPulling="2025-02-13 20:45:52.94890543 +0000 UTC m=+16.605568334" lastFinishedPulling="2025-02-13 20:45:55.463640753 +0000 UTC m=+19.120303697" observedRunningTime="2025-02-13 20:45:56.565229218 +0000 UTC m=+20.221892162" watchObservedRunningTime="2025-02-13 20:45:56.565494298 +0000 UTC m=+20.222157242" Feb 13 20:45:59.556559 kubelet[3441]: I0213 20:45:59.556507 3441 topology_manager.go:215] "Topology Admit Handler" podUID="0609a105-5dd2-446f-9c3d-7b15cb1909d7" podNamespace="calico-system" podName="calico-typha-5b7c99fd4b-gfftz" Feb 13 20:45:59.680255 kubelet[3441]: I0213 20:45:59.680137 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0609a105-5dd2-446f-9c3d-7b15cb1909d7-tigera-ca-bundle\") pod \"calico-typha-5b7c99fd4b-gfftz\" (UID: \"0609a105-5dd2-446f-9c3d-7b15cb1909d7\") " pod="calico-system/calico-typha-5b7c99fd4b-gfftz" Feb 13 20:45:59.680255 kubelet[3441]: I0213 20:45:59.680180 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffr6h\" (UniqueName: \"kubernetes.io/projected/0609a105-5dd2-446f-9c3d-7b15cb1909d7-kube-api-access-ffr6h\") pod \"calico-typha-5b7c99fd4b-gfftz\" (UID: \"0609a105-5dd2-446f-9c3d-7b15cb1909d7\") " pod="calico-system/calico-typha-5b7c99fd4b-gfftz" Feb 13 20:45:59.680255 kubelet[3441]: I0213 20:45:59.680201 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0609a105-5dd2-446f-9c3d-7b15cb1909d7-typha-certs\") pod \"calico-typha-5b7c99fd4b-gfftz\" (UID: \"0609a105-5dd2-446f-9c3d-7b15cb1909d7\") " pod="calico-system/calico-typha-5b7c99fd4b-gfftz" Feb 13 20:45:59.716664 kubelet[3441]: I0213 20:45:59.714520 3441 topology_manager.go:215] "Topology Admit Handler" podUID="d94c4d4f-5966-487a-a5ea-3009d5386db9" podNamespace="calico-system" podName="calico-node-njn8q" Feb 13 20:45:59.839690 kubelet[3441]: I0213 20:45:59.839565 3441 topology_manager.go:215] "Topology Admit Handler" podUID="c975c533-a1a9-45d1-9ae6-e3e1fc2a3401" podNamespace="calico-system" podName="csi-node-driver-897cn" Feb 13 20:45:59.840087 kubelet[3441]: E0213 20:45:59.839848 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-897cn" podUID="c975c533-a1a9-45d1-9ae6-e3e1fc2a3401" Feb 13 20:45:59.862664 containerd[1814]: time="2025-02-13T20:45:59.862280164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b7c99fd4b-gfftz,Uid:0609a105-5dd2-446f-9c3d-7b15cb1909d7,Namespace:calico-system,Attempt:0,}" Feb 13 20:45:59.880983 kubelet[3441]: I0213 20:45:59.880947 3441 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d94c4d4f-5966-487a-a5ea-3009d5386db9-cni-net-dir\") pod \"calico-node-njn8q\" (UID: \"d94c4d4f-5966-487a-a5ea-3009d5386db9\") " pod="calico-system/calico-node-njn8q" Feb 13 20:45:59.881787 kubelet[3441]: I0213 20:45:59.881305 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d94c4d4f-5966-487a-a5ea-3009d5386db9-flexvol-driver-host\") pod \"calico-node-njn8q\" (UID: \"d94c4d4f-5966-487a-a5ea-3009d5386db9\") " pod="calico-system/calico-node-njn8q" Feb 13 20:45:59.881787 kubelet[3441]: I0213 20:45:59.881347 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d94c4d4f-5966-487a-a5ea-3009d5386db9-xtables-lock\") pod \"calico-node-njn8q\" (UID: \"d94c4d4f-5966-487a-a5ea-3009d5386db9\") " pod="calico-system/calico-node-njn8q" Feb 13 20:45:59.881787 kubelet[3441]: I0213 20:45:59.881367 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d94c4d4f-5966-487a-a5ea-3009d5386db9-lib-modules\") pod \"calico-node-njn8q\" (UID: \"d94c4d4f-5966-487a-a5ea-3009d5386db9\") " pod="calico-system/calico-node-njn8q" Feb 13 20:45:59.881787 kubelet[3441]: I0213 20:45:59.881415 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psmlk\" (UniqueName: \"kubernetes.io/projected/d94c4d4f-5966-487a-a5ea-3009d5386db9-kube-api-access-psmlk\") pod \"calico-node-njn8q\" (UID: \"d94c4d4f-5966-487a-a5ea-3009d5386db9\") " pod="calico-system/calico-node-njn8q" Feb 13 20:45:59.881787 kubelet[3441]: I0213 20:45:59.881441 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d94c4d4f-5966-487a-a5ea-3009d5386db9-policysync\") pod \"calico-node-njn8q\" (UID: \"d94c4d4f-5966-487a-a5ea-3009d5386db9\") " pod="calico-system/calico-node-njn8q" Feb 13 20:45:59.882203 kubelet[3441]: I0213 20:45:59.881461 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d94c4d4f-5966-487a-a5ea-3009d5386db9-var-run-calico\") pod \"calico-node-njn8q\" (UID: \"d94c4d4f-5966-487a-a5ea-3009d5386db9\") " pod="calico-system/calico-node-njn8q" Feb 13 20:45:59.882203 kubelet[3441]: I0213 20:45:59.881478 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d94c4d4f-5966-487a-a5ea-3009d5386db9-cni-log-dir\") pod \"calico-node-njn8q\" (UID: \"d94c4d4f-5966-487a-a5ea-3009d5386db9\") " pod="calico-system/calico-node-njn8q" Feb 13 20:45:59.882203 kubelet[3441]: I0213 20:45:59.881496 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d94c4d4f-5966-487a-a5ea-3009d5386db9-cni-bin-dir\") pod \"calico-node-njn8q\" (UID: \"d94c4d4f-5966-487a-a5ea-3009d5386db9\") " pod="calico-system/calico-node-njn8q" Feb 13 20:45:59.882203 kubelet[3441]: I0213 20:45:59.881518 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d94c4d4f-5966-487a-a5ea-3009d5386db9-tigera-ca-bundle\") pod \"calico-node-njn8q\" (UID: \"d94c4d4f-5966-487a-a5ea-3009d5386db9\") " pod="calico-system/calico-node-njn8q" Feb 13 20:45:59.882203 kubelet[3441]: I0213 20:45:59.881535 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d94c4d4f-5966-487a-a5ea-3009d5386db9-node-certs\") pod \"calico-node-njn8q\" (UID: \"d94c4d4f-5966-487a-a5ea-3009d5386db9\") " pod="calico-system/calico-node-njn8q" Feb 13 20:45:59.882308 kubelet[3441]: I0213 20:45:59.881551 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d94c4d4f-5966-487a-a5ea-3009d5386db9-var-lib-calico\") pod \"calico-node-njn8q\" (UID: \"d94c4d4f-5966-487a-a5ea-3009d5386db9\") " pod="calico-system/calico-node-njn8q" Feb 13 20:45:59.926854 containerd[1814]: time="2025-02-13T20:45:59.926158606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:45:59.926854 containerd[1814]: time="2025-02-13T20:45:59.926228726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:45:59.926854 containerd[1814]: time="2025-02-13T20:45:59.926245406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:59.926854 containerd[1814]: time="2025-02-13T20:45:59.926349487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:45:59.983822 kubelet[3441]: I0213 20:45:59.981930 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c975c533-a1a9-45d1-9ae6-e3e1fc2a3401-socket-dir\") pod \"csi-node-driver-897cn\" (UID: \"c975c533-a1a9-45d1-9ae6-e3e1fc2a3401\") " pod="calico-system/csi-node-driver-897cn" Feb 13 20:45:59.983822 kubelet[3441]: I0213 20:45:59.981976 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb9v2\" (UniqueName: \"kubernetes.io/projected/c975c533-a1a9-45d1-9ae6-e3e1fc2a3401-kube-api-access-qb9v2\") pod \"csi-node-driver-897cn\" (UID: \"c975c533-a1a9-45d1-9ae6-e3e1fc2a3401\") " pod="calico-system/csi-node-driver-897cn" Feb 13 20:45:59.983822 kubelet[3441]: I0213 20:45:59.982104 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c975c533-a1a9-45d1-9ae6-e3e1fc2a3401-kubelet-dir\") pod \"csi-node-driver-897cn\" (UID: \"c975c533-a1a9-45d1-9ae6-e3e1fc2a3401\") " pod="calico-system/csi-node-driver-897cn" Feb 13 20:45:59.983822 kubelet[3441]: I0213 20:45:59.982140 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c975c533-a1a9-45d1-9ae6-e3e1fc2a3401-varrun\") pod \"csi-node-driver-897cn\" (UID: \"c975c533-a1a9-45d1-9ae6-e3e1fc2a3401\") " pod="calico-system/csi-node-driver-897cn" Feb 13 20:45:59.983822 kubelet[3441]: I0213 20:45:59.982156 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c975c533-a1a9-45d1-9ae6-e3e1fc2a3401-registration-dir\") pod \"csi-node-driver-897cn\" (UID: \"c975c533-a1a9-45d1-9ae6-e3e1fc2a3401\") " pod="calico-system/csi-node-driver-897cn" Feb 13 20:45:59.987904 kubelet[3441]: E0213 20:45:59.987870 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:45:59.988186 kubelet[3441]: W0213 20:45:59.988068 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:45:59.988186 kubelet[3441]: E0213 20:45:59.988097 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:45:59.997075 containerd[1814]: time="2025-02-13T20:45:59.996933658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b7c99fd4b-gfftz,Uid:0609a105-5dd2-446f-9c3d-7b15cb1909d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8dd00aa3884580473c9e1305b683f0b77cdb6afb1ca7973607ab6037f49abe5\"" Feb 13 20:46:00.000611 containerd[1814]: time="2025-02-13T20:46:00.000196142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:46:00.003664 kubelet[3441]: E0213 20:46:00.003584 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.003664 kubelet[3441]: W0213 20:46:00.003605 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.003664 kubelet[3441]: E0213 20:46:00.003624 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.025786 containerd[1814]: time="2025-02-13T20:46:00.025740535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-njn8q,Uid:d94c4d4f-5966-487a-a5ea-3009d5386db9,Namespace:calico-system,Attempt:0,}" Feb 13 20:46:00.079709 containerd[1814]: time="2025-02-13T20:46:00.079555125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:46:00.080971 containerd[1814]: time="2025-02-13T20:46:00.080348646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:46:00.080971 containerd[1814]: time="2025-02-13T20:46:00.080380606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:00.080971 containerd[1814]: time="2025-02-13T20:46:00.080543686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:00.083464 kubelet[3441]: E0213 20:46:00.083429 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.083464 kubelet[3441]: W0213 20:46:00.083457 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.083591 kubelet[3441]: E0213 20:46:00.083478 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.083828 kubelet[3441]: E0213 20:46:00.083808 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.083828 kubelet[3441]: W0213 20:46:00.083827 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.083938 kubelet[3441]: E0213 20:46:00.083857 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.084502 kubelet[3441]: E0213 20:46:00.084483 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.084502 kubelet[3441]: W0213 20:46:00.084501 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.084623 kubelet[3441]: E0213 20:46:00.084533 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.084748 kubelet[3441]: E0213 20:46:00.084729 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.084748 kubelet[3441]: W0213 20:46:00.084742 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.084812 kubelet[3441]: E0213 20:46:00.084756 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.084992 kubelet[3441]: E0213 20:46:00.084973 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.084992 kubelet[3441]: W0213 20:46:00.084987 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.085205 kubelet[3441]: E0213 20:46:00.085130 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:46:00.085397 kubelet[3441]: E0213 20:46:00.085312 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.085397 kubelet[3441]: W0213 20:46:00.085327 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.085510 kubelet[3441]: E0213 20:46:00.085481 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.085510 kubelet[3441]: E0213 20:46:00.085505 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.085687 kubelet[3441]: W0213 20:46:00.085512 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.085687 kubelet[3441]: E0213 20:46:00.085541 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.085776 kubelet[3441]: E0213 20:46:00.085756 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.085776 kubelet[3441]: W0213 20:46:00.085766 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.085776 kubelet[3441]: E0213 20:46:00.085792 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.086126 kubelet[3441]: E0213 20:46:00.086096 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.086126 kubelet[3441]: W0213 20:46:00.086115 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.086126 kubelet[3441]: E0213 20:46:00.086138 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.086443 kubelet[3441]: E0213 20:46:00.086311 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.086443 kubelet[3441]: W0213 20:46:00.086320 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.086680 kubelet[3441]: E0213 20:46:00.086375 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:46:00.087268 kubelet[3441]: E0213 20:46:00.087250 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.087471 kubelet[3441]: W0213 20:46:00.087354 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.087471 kubelet[3441]: E0213 20:46:00.087443 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.087817 kubelet[3441]: E0213 20:46:00.087771 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.087817 kubelet[3441]: W0213 20:46:00.087783 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.087979 kubelet[3441]: E0213 20:46:00.087901 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.088339 kubelet[3441]: E0213 20:46:00.088245 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.088339 kubelet[3441]: W0213 20:46:00.088263 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.088551 kubelet[3441]: E0213 20:46:00.088429 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.088759 kubelet[3441]: E0213 20:46:00.088628 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.088759 kubelet[3441]: W0213 20:46:00.088657 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.088759 kubelet[3441]: E0213 20:46:00.088725 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.089051 kubelet[3441]: E0213 20:46:00.089002 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.089051 kubelet[3441]: W0213 20:46:00.089037 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.089276 kubelet[3441]: E0213 20:46:00.089209 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:46:00.089461 kubelet[3441]: E0213 20:46:00.089449 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.089616 kubelet[3441]: W0213 20:46:00.089541 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.089757 kubelet[3441]: E0213 20:46:00.089645 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.090106 kubelet[3441]: E0213 20:46:00.089959 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.090106 kubelet[3441]: W0213 20:46:00.089971 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.090106 kubelet[3441]: E0213 20:46:00.090082 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.090349 kubelet[3441]: E0213 20:46:00.090252 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.090349 kubelet[3441]: W0213 20:46:00.090261 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.090533 kubelet[3441]: E0213 20:46:00.090445 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.090533 kubelet[3441]: E0213 20:46:00.090516 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.090533 kubelet[3441]: W0213 20:46:00.090522 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.090765 kubelet[3441]: E0213 20:46:00.090702 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.090927 kubelet[3441]: E0213 20:46:00.090904 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.090927 kubelet[3441]: W0213 20:46:00.090915 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.091203 kubelet[3441]: E0213 20:46:00.091172 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:46:00.091497 kubelet[3441]: E0213 20:46:00.091268 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.091497 kubelet[3441]: W0213 20:46:00.091276 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.091497 kubelet[3441]: E0213 20:46:00.091289 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.093424 kubelet[3441]: E0213 20:46:00.092566 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.093424 kubelet[3441]: W0213 20:46:00.093309 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.093826 kubelet[3441]: E0213 20:46:00.093535 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.097337 kubelet[3441]: E0213 20:46:00.097294 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.098690 kubelet[3441]: W0213 20:46:00.097557 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.102367 kubelet[3441]: E0213 20:46:00.101754 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.102367 kubelet[3441]: W0213 20:46:00.101800 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.103861 kubelet[3441]: E0213 20:46:00.103077 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.103861 kubelet[3441]: W0213 20:46:00.103093 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.103861 kubelet[3441]: E0213 20:46:00.103142 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.104502 kubelet[3441]: E0213 20:46:00.104453 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.105162 kubelet[3441]: E0213 20:46:00.104195 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:46:00.119240 kubelet[3441]: E0213 20:46:00.119139 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:00.119240 kubelet[3441]: W0213 20:46:00.119163 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:00.119707 kubelet[3441]: E0213 20:46:00.119678 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:00.120806 containerd[1814]: time="2025-02-13T20:46:00.120770898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-njn8q,Uid:d94c4d4f-5966-487a-a5ea-3009d5386db9,Namespace:calico-system,Attempt:0,} returns sandbox id \"1fc08170526ad6307843d0888c58c60effe968564009fdec121aaa5e74ffd0e1\"" Feb 13 20:46:00.790311 systemd[1]: run-containerd-runc-k8s.io-a8dd00aa3884580473c9e1305b683f0b77cdb6afb1ca7973607ab6037f49abe5-runc.3cGKus.mount: Deactivated successfully. Feb 13 20:46:01.335383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2383552887.mount: Deactivated successfully. Feb 13 20:46:01.437077 kubelet[3441]: E0213 20:46:01.436520 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-897cn" podUID="c975c533-a1a9-45d1-9ae6-e3e1fc2a3401" Feb 13 20:46:01.873115 containerd[1814]: time="2025-02-13T20:46:01.872915765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:01.875770 containerd[1814]: time="2025-02-13T20:46:01.875719729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 20:46:01.880990 containerd[1814]: time="2025-02-13T20:46:01.880918015Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:01.885252 containerd[1814]: time="2025-02-13T20:46:01.885187501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:01.886094 containerd[1814]: time="2025-02-13T20:46:01.885948822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.8857134s" Feb 13 20:46:01.886094 containerd[1814]: time="2025-02-13T20:46:01.885988902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 20:46:01.887599 containerd[1814]: time="2025-02-13T20:46:01.887461304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:46:01.902866 
containerd[1814]: time="2025-02-13T20:46:01.902819084Z" level=info msg="CreateContainer within sandbox \"a8dd00aa3884580473c9e1305b683f0b77cdb6afb1ca7973607ab6037f49abe5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:46:01.955056 containerd[1814]: time="2025-02-13T20:46:01.954984431Z" level=info msg="CreateContainer within sandbox \"a8dd00aa3884580473c9e1305b683f0b77cdb6afb1ca7973607ab6037f49abe5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2a0c8fed0a4921ebc8b7f165601d1f3674e7d0dbd9419ce76d47c2ac9180d3b3\"" Feb 13 20:46:01.956126 containerd[1814]: time="2025-02-13T20:46:01.955570312Z" level=info msg="StartContainer for \"2a0c8fed0a4921ebc8b7f165601d1f3674e7d0dbd9419ce76d47c2ac9180d3b3\"" Feb 13 20:46:02.017241 containerd[1814]: time="2025-02-13T20:46:02.017195712Z" level=info msg="StartContainer for \"2a0c8fed0a4921ebc8b7f165601d1f3674e7d0dbd9419ce76d47c2ac9180d3b3\" returns successfully" Feb 13 20:46:02.589605 kubelet[3441]: I0213 20:46:02.589498 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b7c99fd4b-gfftz" podStartSLOduration=1.7017201690000001 podStartE2EDuration="3.589465572s" podCreationTimestamp="2025-02-13 20:45:59 +0000 UTC" firstStartedPulling="2025-02-13 20:45:59.999599661 +0000 UTC m=+23.656262605" lastFinishedPulling="2025-02-13 20:46:01.887345064 +0000 UTC m=+25.544008008" observedRunningTime="2025-02-13 20:46:02.589177771 +0000 UTC m=+26.245840715" watchObservedRunningTime="2025-02-13 20:46:02.589465572 +0000 UTC m=+26.246128516" Feb 13 20:46:02.606459 kubelet[3441]: E0213 20:46:02.606387 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.606459 kubelet[3441]: W0213 20:46:02.606454 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.606611 kubelet[3441]: E0213 20:46:02.606477 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.606687 kubelet[3441]: E0213 20:46:02.606655 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.606687 kubelet[3441]: W0213 20:46:02.606672 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.606687 kubelet[3441]: E0213 20:46:02.606681 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:46:02.606845 kubelet[3441]: E0213 20:46:02.606829 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.606845 kubelet[3441]: W0213 20:46:02.606841 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.606904 kubelet[3441]: E0213 20:46:02.606849 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.607005 kubelet[3441]: E0213 20:46:02.606992 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.607061 kubelet[3441]: W0213 20:46:02.607003 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.607061 kubelet[3441]: E0213 20:46:02.607032 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.607208 kubelet[3441]: E0213 20:46:02.607195 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.607208 kubelet[3441]: W0213 20:46:02.607207 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.607265 kubelet[3441]: E0213 20:46:02.607216 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.607364 kubelet[3441]: E0213 20:46:02.607352 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.607398 kubelet[3441]: W0213 20:46:02.607366 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.607398 kubelet[3441]: E0213 20:46:02.607374 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.607980 kubelet[3441]: E0213 20:46:02.607961 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.607980 kubelet[3441]: W0213 20:46:02.607979 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.608063 kubelet[3441]: E0213 20:46:02.607992 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:46:02.608216 kubelet[3441]: E0213 20:46:02.608202 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.608216 kubelet[3441]: W0213 20:46:02.608214 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.608271 kubelet[3441]: E0213 20:46:02.608224 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.608411 kubelet[3441]: E0213 20:46:02.608398 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.608442 kubelet[3441]: W0213 20:46:02.608419 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.608442 kubelet[3441]: E0213 20:46:02.608428 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.608586 kubelet[3441]: E0213 20:46:02.608573 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.608586 kubelet[3441]: W0213 20:46:02.608585 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.608649 kubelet[3441]: E0213 20:46:02.608593 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.608745 kubelet[3441]: E0213 20:46:02.608733 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.608745 kubelet[3441]: W0213 20:46:02.608744 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.608801 kubelet[3441]: E0213 20:46:02.608752 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.608911 kubelet[3441]: E0213 20:46:02.608898 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.608911 kubelet[3441]: W0213 20:46:02.608910 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.608962 kubelet[3441]: E0213 20:46:02.608918 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:46:02.609148 kubelet[3441]: E0213 20:46:02.609134 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.609148 kubelet[3441]: W0213 20:46:02.609147 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.609214 kubelet[3441]: E0213 20:46:02.609156 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.609408 kubelet[3441]: E0213 20:46:02.609392 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.609408 kubelet[3441]: W0213 20:46:02.609406 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.609483 kubelet[3441]: E0213 20:46:02.609416 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.609603 kubelet[3441]: E0213 20:46:02.609589 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.609603 kubelet[3441]: W0213 20:46:02.609602 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.609662 kubelet[3441]: E0213 20:46:02.609611 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.701324 kubelet[3441]: E0213 20:46:02.701244 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.701324 kubelet[3441]: W0213 20:46:02.701266 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.701324 kubelet[3441]: E0213 20:46:02.701284 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.701644 kubelet[3441]: E0213 20:46:02.701464 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.701644 kubelet[3441]: W0213 20:46:02.701473 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.701644 kubelet[3441]: E0213 20:46:02.701489 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:46:02.701644 kubelet[3441]: E0213 20:46:02.701639 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.701644 kubelet[3441]: W0213 20:46:02.701647 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.701794 kubelet[3441]: E0213 20:46:02.701661 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.701835 kubelet[3441]: E0213 20:46:02.701814 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.701835 kubelet[3441]: W0213 20:46:02.701827 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.701835 kubelet[3441]: E0213 20:46:02.701841 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.702197 kubelet[3441]: E0213 20:46:02.702106 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.702197 kubelet[3441]: W0213 20:46:02.702122 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.702197 kubelet[3441]: E0213 20:46:02.702143 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.702493 kubelet[3441]: E0213 20:46:02.702428 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.702493 kubelet[3441]: W0213 20:46:02.702439 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.702493 kubelet[3441]: E0213 20:46:02.702457 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.702814 kubelet[3441]: E0213 20:46:02.702721 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.702814 kubelet[3441]: W0213 20:46:02.702732 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.702814 kubelet[3441]: E0213 20:46:02.702749 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:46:02.703124 kubelet[3441]: E0213 20:46:02.703029 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.703124 kubelet[3441]: W0213 20:46:02.703053 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.703124 kubelet[3441]: E0213 20:46:02.703076 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.703463 kubelet[3441]: E0213 20:46:02.703356 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.703463 kubelet[3441]: W0213 20:46:02.703386 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.703606 kubelet[3441]: E0213 20:46:02.703551 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.703732 kubelet[3441]: E0213 20:46:02.703721 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.703836 kubelet[3441]: W0213 20:46:02.703783 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.703836 kubelet[3441]: E0213 20:46:02.703818 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.704268 kubelet[3441]: E0213 20:46:02.704110 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.704268 kubelet[3441]: W0213 20:46:02.704123 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.704268 kubelet[3441]: E0213 20:46:02.704142 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.704522 kubelet[3441]: E0213 20:46:02.704455 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.704522 kubelet[3441]: W0213 20:46:02.704466 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.704522 kubelet[3441]: E0213 20:46:02.704483 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:46:02.704759 kubelet[3441]: E0213 20:46:02.704742 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.704759 kubelet[3441]: W0213 20:46:02.704758 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.704818 kubelet[3441]: E0213 20:46:02.704778 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.705005 kubelet[3441]: E0213 20:46:02.704990 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.705005 kubelet[3441]: W0213 20:46:02.705003 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.705106 kubelet[3441]: E0213 20:46:02.705033 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.705438 kubelet[3441]: E0213 20:46:02.705319 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.705438 kubelet[3441]: W0213 20:46:02.705331 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.705438 kubelet[3441]: E0213 20:46:02.705351 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.705766 kubelet[3441]: E0213 20:46:02.705624 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.705766 kubelet[3441]: W0213 20:46:02.705638 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.705766 kubelet[3441]: E0213 20:46:02.705658 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:02.705984 kubelet[3441]: E0213 20:46:02.705867 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.705984 kubelet[3441]: W0213 20:46:02.705883 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.705984 kubelet[3441]: E0213 20:46:02.705895 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:46:02.706276 kubelet[3441]: E0213 20:46:02.706262 3441 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:46:02.706372 kubelet[3441]: W0213 20:46:02.706334 3441 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:46:02.706372 kubelet[3441]: E0213 20:46:02.706350 3441 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:46:03.241329 containerd[1814]: time="2025-02-13T20:46:03.241280374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:03.244814 containerd[1814]: time="2025-02-13T20:46:03.244761819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 20:46:03.250642 containerd[1814]: time="2025-02-13T20:46:03.250472506Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:03.258247 containerd[1814]: time="2025-02-13T20:46:03.258169556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:03.259025 containerd[1814]: time="2025-02-13T20:46:03.258881917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.371384293s" Feb 13 20:46:03.259025 containerd[1814]: time="2025-02-13T20:46:03.258917157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 20:46:03.263336 containerd[1814]: time="2025-02-13T20:46:03.263298403Z" level=info msg="CreateContainer within sandbox \"1fc08170526ad6307843d0888c58c60effe968564009fdec121aaa5e74ffd0e1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:46:03.305279 containerd[1814]: time="2025-02-13T20:46:03.305230457Z" level=info msg="CreateContainer within sandbox \"1fc08170526ad6307843d0888c58c60effe968564009fdec121aaa5e74ffd0e1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c1a94a5fa403727bede2e4805071483f298e144abc3fda6d4c1fe6f25553564c\"" Feb 13 20:46:03.305892 containerd[1814]: time="2025-02-13T20:46:03.305834018Z" level=info msg="StartContainer for \"c1a94a5fa403727bede2e4805071483f298e144abc3fda6d4c1fe6f25553564c\"" Feb 13 20:46:03.362605 containerd[1814]: time="2025-02-13T20:46:03.362559611Z" level=info msg="StartContainer for \"c1a94a5fa403727bede2e4805071483f298e144abc3fda6d4c1fe6f25553564c\" returns successfully" Feb 13 20:46:03.436603 kubelet[3441]: E0213 20:46:03.436547 3441 pod_workers.go:1298] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-897cn" podUID="c975c533-a1a9-45d1-9ae6-e3e1fc2a3401" Feb 13 20:46:03.576208 kubelet[3441]: I0213 20:46:03.575331 3441 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:46:03.892464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1a94a5fa403727bede2e4805071483f298e144abc3fda6d4c1fe6f25553564c-rootfs.mount: Deactivated successfully. Feb 13 20:46:04.335804 containerd[1814]: time="2025-02-13T20:46:04.335669109Z" level=info msg="shim disconnected" id=c1a94a5fa403727bede2e4805071483f298e144abc3fda6d4c1fe6f25553564c namespace=k8s.io Feb 13 20:46:04.335804 containerd[1814]: time="2025-02-13T20:46:04.335722869Z" level=warning msg="cleaning up after shim disconnected" id=c1a94a5fa403727bede2e4805071483f298e144abc3fda6d4c1fe6f25553564c namespace=k8s.io Feb 13 20:46:04.335804 containerd[1814]: time="2025-02-13T20:46:04.335731109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:46:04.580724 containerd[1814]: time="2025-02-13T20:46:04.580297226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:46:05.436446 kubelet[3441]: E0213 20:46:05.436373 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-897cn" podUID="c975c533-a1a9-45d1-9ae6-e3e1fc2a3401" Feb 13 20:46:07.429930 containerd[1814]: time="2025-02-13T20:46:07.429218229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:07.431452 containerd[1814]: time="2025-02-13T20:46:07.431418672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 20:46:07.434649 containerd[1814]: time="2025-02-13T20:46:07.434622476Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:07.436325 kubelet[3441]: E0213 20:46:07.436241 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-897cn" podUID="c975c533-a1a9-45d1-9ae6-e3e1fc2a3401" Feb 13 20:46:07.439751 containerd[1814]: time="2025-02-13T20:46:07.439545402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.859165376s" Feb 13 20:46:07.439751 containerd[1814]: time="2025-02-13T20:46:07.439592922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 20:46:07.439751 containerd[1814]: time="2025-02-13T20:46:07.439617562Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:07.442674 containerd[1814]: time="2025-02-13T20:46:07.442483926Z" level=info msg="CreateContainer within sandbox \"1fc08170526ad6307843d0888c58c60effe968564009fdec121aaa5e74ffd0e1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:46:07.488102 containerd[1814]: time="2025-02-13T20:46:07.488054905Z" level=info msg="CreateContainer within sandbox \"1fc08170526ad6307843d0888c58c60effe968564009fdec121aaa5e74ffd0e1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1b901af6fa5d9c9b4097a6c5cb39e12a0dcf7780eef82c7f4b6316c11521c9ce\"" Feb 13 20:46:07.489032 containerd[1814]: time="2025-02-13T20:46:07.488948866Z" level=info msg="StartContainer for \"1b901af6fa5d9c9b4097a6c5cb39e12a0dcf7780eef82c7f4b6316c11521c9ce\"" Feb 13 20:46:07.543504 containerd[1814]: time="2025-02-13T20:46:07.543343456Z" level=info msg="StartContainer for \"1b901af6fa5d9c9b4097a6c5cb39e12a0dcf7780eef82c7f4b6316c11521c9ce\" returns successfully" Feb 13 20:46:08.637482 containerd[1814]: time="2025-02-13T20:46:08.637435111Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:46:08.659244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b901af6fa5d9c9b4097a6c5cb39e12a0dcf7780eef82c7f4b6316c11521c9ce-rootfs.mount: Deactivated successfully. Feb 13 20:46:08.705818 kubelet[3441]: I0213 20:46:08.705777 3441 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 20:46:08.741768 kubelet[3441]: I0213 20:46:08.741162 3441 topology_manager.go:215] "Topology Admit Handler" podUID="93087e5e-d8ec-437a-b934-6999a2c23c2f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9vpwd" Feb 13 20:46:08.751382 kubelet[3441]: I0213 20:46:08.748056 3441 topology_manager.go:215] "Topology Admit Handler" podUID="8b4196eb-15cf-4412-9d84-5406140b93bb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-79ckq" Feb 13 20:46:08.757457 kubelet[3441]: I0213 20:46:08.757173 3441 topology_manager.go:215] "Topology Admit Handler" podUID="29dedd29-2cd0-4e89-b378-f68464171a26" podNamespace="calico-system" podName="calico-kube-controllers-597b588699-srxcs" Feb 13 20:46:08.757826 kubelet[3441]: I0213 20:46:08.757801 3441 topology_manager.go:215] "Topology Admit Handler" podUID="c48df888-6f83-4248-848b-c107d69d27c0" podNamespace="calico-apiserver" podName="calico-apiserver-6868ddd855-gbjvf" Feb 13 20:46:08.758571 kubelet[3441]: I0213 20:46:08.758546 3441 topology_manager.go:215] "Topology Admit Handler" podUID="8e04ade4-b716-4362-a7eb-ded575d07b9c" podNamespace="calico-apiserver" podName="calico-apiserver-6868ddd855-k47dx" Feb 13 20:46:08.845614 kubelet[3441]: I0213 20:46:08.845571 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg6lv\" (UniqueName: \"kubernetes.io/projected/93087e5e-d8ec-437a-b934-6999a2c23c2f-kube-api-access-zg6lv\") pod \"coredns-7db6d8ff4d-9vpwd\" (UID: \"93087e5e-d8ec-437a-b934-6999a2c23c2f\") " pod="kube-system/coredns-7db6d8ff4d-9vpwd" Feb 13 20:46:08.845764 kubelet[3441]: I0213 20:46:08.845662 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93087e5e-d8ec-437a-b934-6999a2c23c2f-config-volume\") pod \"coredns-7db6d8ff4d-9vpwd\" (UID: \"93087e5e-d8ec-437a-b934-6999a2c23c2f\") " pod="kube-system/coredns-7db6d8ff4d-9vpwd" Feb 13 20:46:08.845764 kubelet[3441]: I0213 20:46:08.845721 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b4196eb-15cf-4412-9d84-5406140b93bb-config-volume\") pod \"coredns-7db6d8ff4d-79ckq\" (UID: \"8b4196eb-15cf-4412-9d84-5406140b93bb\") " pod="kube-system/coredns-7db6d8ff4d-79ckq" Feb 13 20:46:08.845764 kubelet[3441]: I0213 20:46:08.845743 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtszp\" (UniqueName: \"kubernetes.io/projected/8b4196eb-15cf-4412-9d84-5406140b93bb-kube-api-access-rtszp\") pod \"coredns-7db6d8ff4d-79ckq\" (UID: \"8b4196eb-15cf-4412-9d84-5406140b93bb\") " pod="kube-system/coredns-7db6d8ff4d-79ckq" Feb 13 20:46:08.946769 kubelet[3441]: I0213 20:46:08.946178 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rfbl\" (UniqueName: \"kubernetes.io/projected/c48df888-6f83-4248-848b-c107d69d27c0-kube-api-access-7rfbl\") pod \"calico-apiserver-6868ddd855-gbjvf\" (UID: \"c48df888-6f83-4248-848b-c107d69d27c0\") " pod="calico-apiserver/calico-apiserver-6868ddd855-gbjvf" Feb 13 20:46:08.946769 kubelet[3441]: I0213 20:46:08.946724 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29dedd29-2cd0-4e89-b378-f68464171a26-tigera-ca-bundle\") pod \"calico-kube-controllers-597b588699-srxcs\" (UID: \"29dedd29-2cd0-4e89-b378-f68464171a26\") " pod="calico-system/calico-kube-controllers-597b588699-srxcs" Feb 13 20:46:08.946769 kubelet[3441]: I0213 20:46:08.946747 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e04ade4-b716-4362-a7eb-ded575d07b9c-calico-apiserver-certs\") pod \"calico-apiserver-6868ddd855-k47dx\" (UID: \"8e04ade4-b716-4362-a7eb-ded575d07b9c\") " pod="calico-apiserver/calico-apiserver-6868ddd855-k47dx" Feb 13 20:46:08.947242 kubelet[3441]: I0213 20:46:08.947201 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c48df888-6f83-4248-848b-c107d69d27c0-calico-apiserver-certs\") pod \"calico-apiserver-6868ddd855-gbjvf\" (UID: \"c48df888-6f83-4248-848b-c107d69d27c0\") " pod="calico-apiserver/calico-apiserver-6868ddd855-gbjvf" Feb 13 20:46:08.947242 kubelet[3441]: I0213 20:46:08.947235 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxjtp\" (UniqueName: \"kubernetes.io/projected/29dedd29-2cd0-4e89-b378-f68464171a26-kube-api-access-cxjtp\") pod \"calico-kube-controllers-597b588699-srxcs\" (UID: \"29dedd29-2cd0-4e89-b378-f68464171a26\") " pod="calico-system/calico-kube-controllers-597b588699-srxcs" Feb 13 20:46:08.947344 kubelet[3441]: I0213 20:46:08.947256 3441 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdp8\" (UniqueName: \"kubernetes.io/projected/8e04ade4-b716-4362-a7eb-ded575d07b9c-kube-api-access-wxdp8\") pod 
\"calico-apiserver-6868ddd855-k47dx\" (UID: \"8e04ade4-b716-4362-a7eb-ded575d07b9c\") " pod="calico-apiserver/calico-apiserver-6868ddd855-k47dx" Feb 13 20:46:09.835287 containerd[1814]: time="2025-02-13T20:46:09.834751219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-79ckq,Uid:8b4196eb-15cf-4412-9d84-5406140b93bb,Namespace:kube-system,Attempt:0,}" Feb 13 20:46:09.835988 containerd[1814]: time="2025-02-13T20:46:09.835796300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9vpwd,Uid:93087e5e-d8ec-437a-b934-6999a2c23c2f,Namespace:kube-system,Attempt:0,}" Feb 13 20:46:09.839409 containerd[1814]: time="2025-02-13T20:46:09.836443381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-897cn,Uid:c975c533-a1a9-45d1-9ae6-e3e1fc2a3401,Namespace:calico-system,Attempt:0,}" Feb 13 20:46:09.847395 containerd[1814]: time="2025-02-13T20:46:09.846596194Z" level=info msg="shim disconnected" id=1b901af6fa5d9c9b4097a6c5cb39e12a0dcf7780eef82c7f4b6316c11521c9ce namespace=k8s.io Feb 13 20:46:09.847551 containerd[1814]: time="2025-02-13T20:46:09.847526995Z" level=warning msg="cleaning up after shim disconnected" id=1b901af6fa5d9c9b4097a6c5cb39e12a0dcf7780eef82c7f4b6316c11521c9ce namespace=k8s.io Feb 13 20:46:09.848063 containerd[1814]: time="2025-02-13T20:46:09.848036956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:46:09.862047 containerd[1814]: time="2025-02-13T20:46:09.861992134Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:46:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:46:09.962546 containerd[1814]: time="2025-02-13T20:46:09.962494344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6868ddd855-k47dx,Uid:8e04ade4-b716-4362-a7eb-ded575d07b9c,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:46:09.964704 containerd[1814]: time="2025-02-13T20:46:09.964668387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597b588699-srxcs,Uid:29dedd29-2cd0-4e89-b378-f68464171a26,Namespace:calico-system,Attempt:0,}" Feb 13 20:46:09.966677 containerd[1814]: time="2025-02-13T20:46:09.966637149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6868ddd855-gbjvf,Uid:c48df888-6f83-4248-848b-c107d69d27c0,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:46:10.006125 containerd[1814]: time="2025-02-13T20:46:10.005633120Z" level=error msg="Failed to destroy network for sandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.006125 containerd[1814]: time="2025-02-13T20:46:10.005969800Z" level=error msg="encountered an error cleaning up failed sandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.006125 containerd[1814]: time="2025-02-13T20:46:10.006032800Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-79ckq,Uid:8b4196eb-15cf-4412-9d84-5406140b93bb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.006315 kubelet[3441]: E0213 20:46:10.006238 3441 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.006608 kubelet[3441]: E0213 20:46:10.006324 3441 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-79ckq" Feb 13 20:46:10.006608 kubelet[3441]: E0213 20:46:10.006343 3441 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-79ckq" Feb 13 20:46:10.006608 kubelet[3441]: E0213 20:46:10.006379 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-79ckq_kube-system(8b4196eb-15cf-4412-9d84-5406140b93bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-79ckq_kube-system(8b4196eb-15cf-4412-9d84-5406140b93bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-79ckq" podUID="8b4196eb-15cf-4412-9d84-5406140b93bb" Feb 13 20:46:10.027979 containerd[1814]: time="2025-02-13T20:46:10.027576348Z" level=error msg="Failed to destroy network for sandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.030398 containerd[1814]: time="2025-02-13T20:46:10.030266632Z" level=error msg="encountered an error cleaning up failed sandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.030693 containerd[1814]: 
time="2025-02-13T20:46:10.030561192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-897cn,Uid:c975c533-a1a9-45d1-9ae6-e3e1fc2a3401,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.031700 kubelet[3441]: E0213 20:46:10.031648 3441 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.032892 kubelet[3441]: E0213 20:46:10.031713 3441 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-897cn" Feb 13 20:46:10.032892 kubelet[3441]: E0213 20:46:10.031734 3441 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-897cn" Feb 13 20:46:10.032892 kubelet[3441]: E0213 20:46:10.031776 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-897cn_calico-system(c975c533-a1a9-45d1-9ae6-e3e1fc2a3401)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-897cn_calico-system(c975c533-a1a9-45d1-9ae6-e3e1fc2a3401)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-897cn" podUID="c975c533-a1a9-45d1-9ae6-e3e1fc2a3401" Feb 13 20:46:10.037155 containerd[1814]: time="2025-02-13T20:46:10.037052040Z" level=error msg="Failed to destroy network for sandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.037403 containerd[1814]: time="2025-02-13T20:46:10.037372881Z" level=error msg="encountered an error cleaning up failed sandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 20:46:10.037441 containerd[1814]: time="2025-02-13T20:46:10.037426161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9vpwd,Uid:93087e5e-d8ec-437a-b934-6999a2c23c2f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.037703 kubelet[3441]: E0213 20:46:10.037659 3441 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.037763 kubelet[3441]: E0213 20:46:10.037721 3441 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9vpwd" Feb 13 20:46:10.037763 kubelet[3441]: E0213 20:46:10.037740 3441 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9vpwd" Feb 13 20:46:10.037827 kubelet[3441]: E0213 20:46:10.037786 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9vpwd_kube-system(93087e5e-d8ec-437a-b934-6999a2c23c2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9vpwd_kube-system(93087e5e-d8ec-437a-b934-6999a2c23c2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9vpwd" podUID="93087e5e-d8ec-437a-b934-6999a2c23c2f" Feb 13 20:46:10.106901 containerd[1814]: time="2025-02-13T20:46:10.106845571Z" level=error msg="Failed to destroy network for sandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.109120 containerd[1814]: time="2025-02-13T20:46:10.108951133Z" level=error msg="encountered an error cleaning up failed sandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.109120 containerd[1814]: time="2025-02-13T20:46:10.109031853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6868ddd855-k47dx,Uid:8e04ade4-b716-4362-a7eb-ded575d07b9c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.109385 kubelet[3441]: E0213 20:46:10.109344 3441 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.109437 kubelet[3441]: E0213 20:46:10.109404 3441 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6868ddd855-k47dx" Feb 13 20:46:10.109583 kubelet[3441]: E0213 20:46:10.109426 3441 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6868ddd855-k47dx" Feb 13 20:46:10.110103 kubelet[3441]: E0213 20:46:10.109682 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6868ddd855-k47dx_calico-apiserver(8e04ade4-b716-4362-a7eb-ded575d07b9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6868ddd855-k47dx_calico-apiserver(8e04ade4-b716-4362-a7eb-ded575d07b9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6868ddd855-k47dx" podUID="8e04ade4-b716-4362-a7eb-ded575d07b9c" Feb 13 20:46:10.147250 containerd[1814]: time="2025-02-13T20:46:10.147197543Z" level=error msg="Failed to destroy network for sandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.148003 containerd[1814]: time="2025-02-13T20:46:10.147871984Z" level=error msg="encountered an error cleaning up failed sandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.148003 containerd[1814]: time="2025-02-13T20:46:10.147944584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597b588699-srxcs,Uid:29dedd29-2cd0-4e89-b378-f68464171a26,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.148375 kubelet[3441]: E0213 20:46:10.148193 3441 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.148375 kubelet[3441]: E0213 20:46:10.148261 3441 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-597b588699-srxcs" Feb 13 20:46:10.148375 kubelet[3441]: E0213 20:46:10.148285 3441 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-597b588699-srxcs" Feb 13 20:46:10.148490 kubelet[3441]: E0213 20:46:10.148323 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-597b588699-srxcs_calico-system(29dedd29-2cd0-4e89-b378-f68464171a26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-597b588699-srxcs_calico-system(29dedd29-2cd0-4e89-b378-f68464171a26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-597b588699-srxcs" podUID="29dedd29-2cd0-4e89-b378-f68464171a26" Feb 13 20:46:10.160531 containerd[1814]: time="2025-02-13T20:46:10.160465240Z" level=error msg="Failed to destroy network for sandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.160884 containerd[1814]: 
time="2025-02-13T20:46:10.160850320Z" level=error msg="encountered an error cleaning up failed sandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.160946 containerd[1814]: time="2025-02-13T20:46:10.160919281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6868ddd855-gbjvf,Uid:c48df888-6f83-4248-848b-c107d69d27c0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.161315 kubelet[3441]: E0213 20:46:10.161264 3441 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.161384 kubelet[3441]: E0213 20:46:10.161337 3441 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6868ddd855-gbjvf" Feb 13 20:46:10.161384 kubelet[3441]: E0213 20:46:10.161363 3441 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6868ddd855-gbjvf" Feb 13 20:46:10.161435 kubelet[3441]: E0213 20:46:10.161402 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6868ddd855-gbjvf_calico-apiserver(c48df888-6f83-4248-848b-c107d69d27c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6868ddd855-gbjvf_calico-apiserver(c48df888-6f83-4248-848b-c107d69d27c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6868ddd855-gbjvf" podUID="c48df888-6f83-4248-848b-c107d69d27c0" Feb 13 20:46:10.598868 kubelet[3441]: I0213 20:46:10.597537 3441 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:10.598975 containerd[1814]: 
time="2025-02-13T20:46:10.598387484Z" level=info msg="StopPodSandbox for \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\"" Feb 13 20:46:10.598975 containerd[1814]: time="2025-02-13T20:46:10.598551525Z" level=info msg="Ensure that sandbox a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb in task-service has been cleanup successfully" Feb 13 20:46:10.602078 kubelet[3441]: I0213 20:46:10.602003 3441 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:10.602669 containerd[1814]: time="2025-02-13T20:46:10.602637650Z" level=info msg="StopPodSandbox for \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\"" Feb 13 20:46:10.603126 containerd[1814]: time="2025-02-13T20:46:10.603102570Z" level=info msg="Ensure that sandbox ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388 in task-service has been cleanup successfully" Feb 13 20:46:10.603891 kubelet[3441]: I0213 20:46:10.603831 3441 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:10.605673 containerd[1814]: time="2025-02-13T20:46:10.605539654Z" level=info msg="StopPodSandbox for \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\"" Feb 13 20:46:10.605964 kubelet[3441]: I0213 20:46:10.605745 3441 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:10.606534 containerd[1814]: time="2025-02-13T20:46:10.606484615Z" level=info msg="Ensure that sandbox 76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb in task-service has been cleanup successfully" Feb 13 20:46:10.607258 containerd[1814]: time="2025-02-13T20:46:10.607177496Z" level=info msg="StopPodSandbox for \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\"" Feb 13 20:46:10.607806 containerd[1814]: time="2025-02-13T20:46:10.607601856Z" level=info msg="Ensure that sandbox 9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1 in task-service has been cleanup successfully" Feb 13 20:46:10.619244 containerd[1814]: time="2025-02-13T20:46:10.619048071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:46:10.623758 kubelet[3441]: I0213 20:46:10.623100 3441 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:10.625194 containerd[1814]: time="2025-02-13T20:46:10.625160239Z" level=info msg="StopPodSandbox for \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\"" Feb 13 20:46:10.625647 containerd[1814]: time="2025-02-13T20:46:10.625620559Z" level=info msg="Ensure that sandbox ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d in task-service has been cleanup successfully" Feb 13 20:46:10.634349 kubelet[3441]: I0213 20:46:10.634297 3441 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:10.635269 containerd[1814]: time="2025-02-13T20:46:10.635186852Z" level=info msg="StopPodSandbox for \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\"" Feb 13 20:46:10.637053 containerd[1814]: time="2025-02-13T20:46:10.636655134Z" level=info msg="Ensure that sandbox 
746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8 in task-service has been cleanup successfully" Feb 13 20:46:10.685334 containerd[1814]: time="2025-02-13T20:46:10.684907036Z" level=error msg="StopPodSandbox for \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\" failed" error="failed to destroy network for sandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.685494 kubelet[3441]: E0213 20:46:10.685161 3441 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:10.685494 kubelet[3441]: E0213 20:46:10.685218 3441 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb"} Feb 13 20:46:10.685494 kubelet[3441]: E0213 20:46:10.685273 3441 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e04ade4-b716-4362-a7eb-ded575d07b9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:46:10.685494 kubelet[3441]: E0213 20:46:10.685294 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e04ade4-b716-4362-a7eb-ded575d07b9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6868ddd855-k47dx" podUID="8e04ade4-b716-4362-a7eb-ded575d07b9c" Feb 13 20:46:10.693567 containerd[1814]: time="2025-02-13T20:46:10.693236927Z" level=error msg="StopPodSandbox for \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\" failed" error="failed to destroy network for sandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.693802 kubelet[3441]: E0213 20:46:10.693578 3441 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:10.693802 kubelet[3441]: E0213 20:46:10.693622 3441 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d"} Feb 13 20:46:10.693802 kubelet[3441]: E0213 20:46:10.693660 3441 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c48df888-6f83-4248-848b-c107d69d27c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:46:10.693802 kubelet[3441]: E0213 20:46:10.693682 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c48df888-6f83-4248-848b-c107d69d27c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6868ddd855-gbjvf" podUID="c48df888-6f83-4248-848b-c107d69d27c0" Feb 13 20:46:10.709360 containerd[1814]: time="2025-02-13T20:46:10.708855107Z" level=error msg="StopPodSandbox for \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\" failed" error="failed to destroy network for sandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.709497 kubelet[3441]: E0213 20:46:10.709207 3441 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:10.709497 kubelet[3441]: E0213 20:46:10.709260 3441 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1"} Feb 13 20:46:10.709497 kubelet[3441]: E0213 20:46:10.709297 3441 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b4196eb-15cf-4412-9d84-5406140b93bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:46:10.709497 kubelet[3441]: E0213 20:46:10.709326 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b4196eb-15cf-4412-9d84-5406140b93bb\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-79ckq" podUID="8b4196eb-15cf-4412-9d84-5406140b93bb" Feb 13 20:46:10.715761 containerd[1814]: time="2025-02-13T20:46:10.715440675Z" level=error msg="StopPodSandbox for \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\" failed" error="failed to destroy network for sandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.715871 kubelet[3441]: E0213 20:46:10.715670 3441 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:10.715871 kubelet[3441]: E0213 20:46:10.715716 3441 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388"} Feb 13 20:46:10.715871 kubelet[3441]: E0213 20:46:10.715753 3441 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29dedd29-2cd0-4e89-b378-f68464171a26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:46:10.715871 kubelet[3441]: E0213 20:46:10.715775 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29dedd29-2cd0-4e89-b378-f68464171a26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-597b588699-srxcs" podUID="29dedd29-2cd0-4e89-b378-f68464171a26" Feb 13 20:46:10.718912 containerd[1814]: time="2025-02-13T20:46:10.718720279Z" level=error msg="StopPodSandbox for \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\" failed" error="failed to destroy network for sandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.719689 kubelet[3441]: E0213 20:46:10.718953 3441 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to destroy network for sandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:10.719689 kubelet[3441]: E0213 20:46:10.718991 3441 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8"} Feb 13 20:46:10.719689 kubelet[3441]: E0213 20:46:10.719086 3441 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c975c533-a1a9-45d1-9ae6-e3e1fc2a3401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:46:10.719689 kubelet[3441]: E0213 20:46:10.719113 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c975c533-a1a9-45d1-9ae6-e3e1fc2a3401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-897cn" podUID="c975c533-a1a9-45d1-9ae6-e3e1fc2a3401" Feb 13 20:46:10.727637 containerd[1814]: time="2025-02-13T20:46:10.727588931Z" level=error msg="StopPodSandbox for \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\" failed" error="failed to destroy network for sandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:46:10.727871 kubelet[3441]: E0213 20:46:10.727815 3441 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:10.727871 kubelet[3441]: E0213 20:46:10.727864 3441 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb"} Feb 13 20:46:10.727943 kubelet[3441]: E0213 20:46:10.727894 3441 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93087e5e-d8ec-437a-b934-6999a2c23c2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:46:10.727943 kubelet[3441]: E0213 20:46:10.727915 3441 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93087e5e-d8ec-437a-b934-6999a2c23c2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9vpwd" podUID="93087e5e-d8ec-437a-b934-6999a2c23c2f" Feb 13 20:46:10.823540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1-shm.mount: Deactivated successfully. Feb 13 20:46:14.930030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920175036.mount: Deactivated successfully. Feb 13 20:46:15.230129 containerd[1814]: time="2025-02-13T20:46:15.229977252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:15.233302 containerd[1814]: time="2025-02-13T20:46:15.233048576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 20:46:15.245838 containerd[1814]: time="2025-02-13T20:46:15.245681192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.625356199s" Feb 13 20:46:15.245838 containerd[1814]: time="2025-02-13T20:46:15.245727192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 20:46:15.255043 containerd[1814]: time="2025-02-13T20:46:15.250544278Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:15.255043 containerd[1814]: time="2025-02-13T20:46:15.251246279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:15.263352 containerd[1814]: time="2025-02-13T20:46:15.263315415Z" level=info msg="CreateContainer within sandbox \"1fc08170526ad6307843d0888c58c60effe968564009fdec121aaa5e74ffd0e1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:46:15.304864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189991321.mount: Deactivated successfully. 
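The repeated sandbox failures above share a single root cause: the Calico CNI binary that containerd invokes for every pod network ADD and DELETE reads the node name from /var/lib/calico/nodename, and that file is only written by the calico/node container (which bind-mounts /var/lib/calico from the host) once it is up. Until the ghcr.io/flatcar/calico/node image pulled above is actually running, every sandbox operation fails with the same ENOENT, and kubelet keeps retrying each pending pod, which is why the identical message repeats once per pod. A minimal Go sketch of that precondition, assuming nothing beyond the path named in the log (an illustration of the check, not Calico's actual source):

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path every error above points at; calico/node writes
// it into the /var/lib/calico host mount when it starts.
const nodenameFile = "/var/lib/calico/nodename"

// determineNodename mirrors the precondition the plugin enforces: no
// nodename file, no ADD/DELETE. The error wording follows the hint in the log.
func determineNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // a real plugin returns this to the runtime
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}

Once calico-node starts (the StartContainer records that follow), the file exists and the same CNI invocations begin to succeed.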
Feb 13 20:46:15.317448 containerd[1814]: time="2025-02-13T20:46:15.317308804Z" level=info msg="CreateContainer within sandbox \"1fc08170526ad6307843d0888c58c60effe968564009fdec121aaa5e74ffd0e1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9210ff6b272d8e5c0bfed05a13a33f1d62be2e18cbcff464b05d3ab0afc2b433\"" Feb 13 20:46:15.319422 containerd[1814]: time="2025-02-13T20:46:15.317990365Z" level=info msg="StartContainer for \"9210ff6b272d8e5c0bfed05a13a33f1d62be2e18cbcff464b05d3ab0afc2b433\"" Feb 13 20:46:15.390807 containerd[1814]: time="2025-02-13T20:46:15.390762659Z" level=info msg="StartContainer for \"9210ff6b272d8e5c0bfed05a13a33f1d62be2e18cbcff464b05d3ab0afc2b433\" returns successfully" Feb 13 20:46:15.478598 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:46:15.478766 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Feb 13 20:46:21.331026 kubelet[3441]: I0213 20:46:21.330086 3441 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:46:21.438999 containerd[1814]: time="2025-02-13T20:46:21.438948665Z" level=info msg="StopPodSandbox for \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\"" Feb 13 20:46:21.522189 kubelet[3441]: I0213 20:46:21.522123 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-njn8q" podStartSLOduration=7.397949082 podStartE2EDuration="22.522085375s" podCreationTimestamp="2025-02-13 20:45:59 +0000 UTC" firstStartedPulling="2025-02-13 20:46:00.12247102 +0000 UTC m=+23.779133964" lastFinishedPulling="2025-02-13 20:46:15.246607313 +0000 UTC m=+38.903270257" observedRunningTime="2025-02-13 20:46:15.670208019 +0000 UTC m=+39.326870963" watchObservedRunningTime="2025-02-13 20:46:21.522085375 +0000 UTC m=+45.178748319" Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.516 [INFO][4745] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.516 [INFO][4745] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" iface="eth0" netns="/var/run/netns/cni-e417f07e-8b3d-f793-49d0-559bb4d8d9a7" Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.517 [INFO][4745] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" iface="eth0" netns="/var/run/netns/cni-e417f07e-8b3d-f793-49d0-559bb4d8d9a7" Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.517 [INFO][4745] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" iface="eth0" netns="/var/run/netns/cni-e417f07e-8b3d-f793-49d0-559bb4d8d9a7" Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.517 [INFO][4745] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.517 [INFO][4745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.538 [INFO][4753] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" HandleID="k8s-pod-network.ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.539 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.539 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.548 [WARNING][4753] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" HandleID="k8s-pod-network.ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.548 [INFO][4753] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" HandleID="k8s-pod-network.ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.550 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:21.553612 containerd[1814]: 2025-02-13 20:46:21.552 [INFO][4745] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:21.557322 containerd[1814]: time="2025-02-13T20:46:21.553817537Z" level=info msg="TearDown network for sandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\" successfully" Feb 13 20:46:21.557322 containerd[1814]: time="2025-02-13T20:46:21.553845857Z" level=info msg="StopPodSandbox for \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\" returns successfully" Feb 13 20:46:21.560332 systemd[1]: run-netns-cni\x2de417f07e\x2d8b3d\x2df793\x2d49d0\x2d559bb4d8d9a7.mount: Deactivated successfully. 
Feb 13 20:46:21.567292 containerd[1814]: time="2025-02-13T20:46:21.566870954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6868ddd855-gbjvf,Uid:c48df888-6f83-4248-848b-c107d69d27c0,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:46:21.733684 systemd-networkd[1375]: calif0d7238e1ef: Link UP Feb 13 20:46:21.735843 systemd-networkd[1375]: calif0d7238e1ef: Gained carrier Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.634 [INFO][4767] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.646 [INFO][4767] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0 calico-apiserver-6868ddd855- calico-apiserver c48df888-6f83-4248-848b-c107d69d27c0 770 0 2025-02-13 20:45:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6868ddd855 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-1c3e1e2868 calico-apiserver-6868ddd855-gbjvf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif0d7238e1ef [] []}} ContainerID="161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-gbjvf" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-" Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.646 [INFO][4767] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-gbjvf" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.674 [INFO][4778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" HandleID="k8s-pod-network.161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.686 [INFO][4778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" HandleID="k8s-pod-network.161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b8c40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-1c3e1e2868", "pod":"calico-apiserver-6868ddd855-gbjvf", "timestamp":"2025-02-13 20:46:21.674655616 +0000 UTC"}, Hostname:"ci-4081.3.1-a-1c3e1e2868", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.686 [INFO][4778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.686 [INFO][4778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.686 [INFO][4778] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-1c3e1e2868' Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.688 [INFO][4778] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.691 [INFO][4778] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.695 [INFO][4778] ipam/ipam.go 489: Trying affinity for 192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.697 [INFO][4778] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.699 [INFO][4778] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.699 [INFO][4778] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.700 [INFO][4778] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0 Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.705 [INFO][4778] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.719 [INFO][4778] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.65/26] block=192.168.31.64/26 handle="k8s-pod-network.161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.719 [INFO][4778] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.65/26] handle="k8s-pod-network.161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.719 [INFO][4778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
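The trace above is block-affinity IPAM at work: the host ci-4081.3.1-a-1c3e1e2868 holds an affinity for the /26 block 192.168.31.64/26, loads that block under the host-wide lock, claims the first free address in it (192.168.31.65 here), and writes the block back to the datastore to make the claim durable. A minimal Go sketch of the linear scan only, leaving out the reservation handling and datastore-conflict retries real Calico performs; marking .64 as already in use is an assumption (the log shows only that .65 is the first address handed out):

package main

import (
	"fmt"
	"net/netip"
)

// assignFromBlock returns the first address in the block not yet in use,
// mirroring "Attempting to assign 1 addresses from block" above.
func assignFromBlock(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted; real IPAM tries another block
}

func main() {
	block := netip.MustParsePrefix("192.168.31.64/26")
	used := map[netip.Addr]bool{
		// Assumed occupied so the scan lands on .65 as in the log.
		netip.MustParseAddr("192.168.31.64"): true,
	}
	if ip, ok := assignFromBlock(block, used); ok {
		fmt.Println("assigned", ip) // prints 192.168.31.65
	}
}

The same walk hands out the next free address, 192.168.31.66, to coredns-7db6d8ff4d-79ckq in the trace that follows a second later.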
Feb 13 20:46:21.765437 containerd[1814]: 2025-02-13 20:46:21.719 [INFO][4778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.65/26] IPv6=[] ContainerID="161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" HandleID="k8s-pod-network.161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:21.765988 containerd[1814]: 2025-02-13 20:46:21.721 [INFO][4767] cni-plugin/k8s.go 386: Populated endpoint ContainerID="161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-gbjvf" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0", GenerateName:"calico-apiserver-6868ddd855-", Namespace:"calico-apiserver", SelfLink:"", UID:"c48df888-6f83-4248-848b-c107d69d27c0", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6868ddd855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"", Pod:"calico-apiserver-6868ddd855-gbjvf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0d7238e1ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:21.765988 containerd[1814]: 2025-02-13 20:46:21.722 [INFO][4767] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.65/32] ContainerID="161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-gbjvf" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:21.765988 containerd[1814]: 2025-02-13 20:46:21.722 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0d7238e1ef ContainerID="161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-gbjvf" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:21.765988 containerd[1814]: 2025-02-13 20:46:21.735 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-gbjvf" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:21.765988 containerd[1814]: 2025-02-13 20:46:21.735 [INFO][4767] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-gbjvf" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0", GenerateName:"calico-apiserver-6868ddd855-", Namespace:"calico-apiserver", SelfLink:"", UID:"c48df888-6f83-4248-848b-c107d69d27c0", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6868ddd855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0", Pod:"calico-apiserver-6868ddd855-gbjvf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0d7238e1ef", MAC:"aa:ce:0d:e8:44:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:21.765988 containerd[1814]: 2025-02-13 20:46:21.756 [INFO][4767] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-gbjvf" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:21.794053 containerd[1814]: time="2025-02-13T20:46:21.793920933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:46:21.794053 containerd[1814]: time="2025-02-13T20:46:21.793979293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:46:21.794053 containerd[1814]: time="2025-02-13T20:46:21.793994653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:21.794411 containerd[1814]: time="2025-02-13T20:46:21.794149173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:21.835259 containerd[1814]: time="2025-02-13T20:46:21.835207067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6868ddd855-gbjvf,Uid:c48df888-6f83-4248-848b-c107d69d27c0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0\"" Feb 13 20:46:21.838346 containerd[1814]: time="2025-02-13T20:46:21.838246871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:46:22.118681 kubelet[3441]: I0213 20:46:22.118462 3441 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:46:22.442984 containerd[1814]: time="2025-02-13T20:46:22.442732228Z" level=info msg="StopPodSandbox for \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\"" Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.515 [INFO][4869] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.515 [INFO][4869] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" iface="eth0" netns="/var/run/netns/cni-d7303d9d-3d13-604a-e85e-0e3b578990fa" Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.516 [INFO][4869] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" iface="eth0" netns="/var/run/netns/cni-d7303d9d-3d13-604a-e85e-0e3b578990fa" Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.516 [INFO][4869] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" iface="eth0" netns="/var/run/netns/cni-d7303d9d-3d13-604a-e85e-0e3b578990fa" Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.516 [INFO][4869] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.516 [INFO][4869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.536 [INFO][4875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" HandleID="k8s-pod-network.9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.536 [INFO][4875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.536 [INFO][4875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.544 [WARNING][4875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" HandleID="k8s-pod-network.9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.544 [INFO][4875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" HandleID="k8s-pod-network.9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.546 [INFO][4875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:22.549043 containerd[1814]: 2025-02-13 20:46:22.547 [INFO][4869] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:22.551702 containerd[1814]: time="2025-02-13T20:46:22.549138648Z" level=info msg="TearDown network for sandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\" successfully" Feb 13 20:46:22.551702 containerd[1814]: time="2025-02-13T20:46:22.549165208Z" level=info msg="StopPodSandbox for \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\" returns successfully" Feb 13 20:46:22.553409 containerd[1814]: time="2025-02-13T20:46:22.551935892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-79ckq,Uid:8b4196eb-15cf-4412-9d84-5406140b93bb,Namespace:kube-system,Attempt:1,}" Feb 13 20:46:22.553633 systemd[1]: run-netns-cni\x2dd7303d9d\x2d3d13\x2d604a\x2de85e\x2d0e3b578990fa.mount: Deactivated successfully. Feb 13 20:46:22.774971 systemd-networkd[1375]: cali3251d65bd7f: Link UP Feb 13 20:46:22.777550 systemd-networkd[1375]: cali3251d65bd7f: Gained carrier Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.636 [INFO][4887] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.653 [INFO][4887] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0 coredns-7db6d8ff4d- kube-system 8b4196eb-15cf-4412-9d84-5406140b93bb 784 0 2025-02-13 20:45:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-1c3e1e2868 coredns-7db6d8ff4d-79ckq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3251d65bd7f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79ckq" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-" Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.653 [INFO][4887] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79ckq" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.713 [INFO][4899] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" HandleID="k8s-pod-network.5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.729 [INFO][4899] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" HandleID="k8s-pod-network.5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d680), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-1c3e1e2868", "pod":"coredns-7db6d8ff4d-79ckq", "timestamp":"2025-02-13 20:46:22.713287544 +0000 UTC"}, Hostname:"ci-4081.3.1-a-1c3e1e2868", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.729 [INFO][4899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.729 [INFO][4899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.729 [INFO][4899] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-1c3e1e2868' Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.731 [INFO][4899] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.737 [INFO][4899] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.743 [INFO][4899] ipam/ipam.go 489: Trying affinity for 192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.745 [INFO][4899] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.748 [INFO][4899] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.748 [INFO][4899] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.749 [INFO][4899] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274 Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.760 [INFO][4899] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.768 [INFO][4899] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.66/26] block=192.168.31.64/26 handle="k8s-pod-network.5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" 
host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.768 [INFO][4899] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.66/26] handle="k8s-pod-network.5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.768 [INFO][4899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:22.796117 containerd[1814]: 2025-02-13 20:46:22.768 [INFO][4899] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.66/26] IPv6=[] ContainerID="5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" HandleID="k8s-pod-network.5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:22.796682 containerd[1814]: 2025-02-13 20:46:22.770 [INFO][4887] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79ckq" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8b4196eb-15cf-4412-9d84-5406140b93bb", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"", Pod:"coredns-7db6d8ff4d-79ckq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3251d65bd7f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:22.796682 containerd[1814]: 2025-02-13 20:46:22.770 [INFO][4887] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.66/32] ContainerID="5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79ckq" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:22.796682 containerd[1814]: 2025-02-13 20:46:22.770 [INFO][4887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3251d65bd7f ContainerID="5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-79ckq" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:22.796682 containerd[1814]: 2025-02-13 20:46:22.777 [INFO][4887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79ckq" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:22.796682 containerd[1814]: 2025-02-13 20:46:22.778 [INFO][4887] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79ckq" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8b4196eb-15cf-4412-9d84-5406140b93bb", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274", Pod:"coredns-7db6d8ff4d-79ckq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3251d65bd7f", MAC:"a2:09:12:b2:1f:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:22.796682 containerd[1814]: 2025-02-13 20:46:22.793 [INFO][4887] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79ckq" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:22.822330 containerd[1814]: time="2025-02-13T20:46:22.821853647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:46:22.822330 containerd[1814]: time="2025-02-13T20:46:22.821914167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:46:22.822330 containerd[1814]: time="2025-02-13T20:46:22.821936207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:22.822330 containerd[1814]: time="2025-02-13T20:46:22.822069967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:22.878266 containerd[1814]: time="2025-02-13T20:46:22.878170321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-79ckq,Uid:8b4196eb-15cf-4412-9d84-5406140b93bb,Namespace:kube-system,Attempt:1,} returns sandbox id \"5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274\"" Feb 13 20:46:22.882645 containerd[1814]: time="2025-02-13T20:46:22.882543047Z" level=info msg="CreateContainer within sandbox \"5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:46:22.938265 containerd[1814]: time="2025-02-13T20:46:22.938128360Z" level=info msg="CreateContainer within sandbox \"5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f9d3aef4e11fefaab084421073ba9d5e351583aa9c496d626d8552408f81e67\"" Feb 13 20:46:22.941942 containerd[1814]: time="2025-02-13T20:46:22.940987564Z" level=info msg="StartContainer for \"5f9d3aef4e11fefaab084421073ba9d5e351583aa9c496d626d8552408f81e67\"" Feb 13 20:46:22.999662 containerd[1814]: time="2025-02-13T20:46:22.999612761Z" level=info msg="StartContainer for \"5f9d3aef4e11fefaab084421073ba9d5e351583aa9c496d626d8552408f81e67\" returns successfully" Feb 13 20:46:23.243039 kernel: bpftool[5029]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:46:23.440623 containerd[1814]: time="2025-02-13T20:46:23.439575181Z" level=info msg="StopPodSandbox for \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\"" Feb 13 20:46:23.441312 containerd[1814]: time="2025-02-13T20:46:23.441030423Z" level=info msg="StopPodSandbox for \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\"" Feb 13 20:46:23.441724 containerd[1814]: time="2025-02-13T20:46:23.441700864Z" level=info msg="StopPodSandbox for \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\"" Feb 13 20:46:23.480940 systemd-networkd[1375]: vxlan.calico: Link UP Feb 13 20:46:23.481324 systemd-networkd[1375]: vxlan.calico: Gained carrier Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.561 [INFO][5088] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.563 [INFO][5088] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" iface="eth0" netns="/var/run/netns/cni-3b79850f-b447-5461-4eb4-7df02d3f8689" Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.564 [INFO][5088] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" iface="eth0" netns="/var/run/netns/cni-3b79850f-b447-5461-4eb4-7df02d3f8689" Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.564 [INFO][5088] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" iface="eth0" netns="/var/run/netns/cni-3b79850f-b447-5461-4eb4-7df02d3f8689" Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.564 [INFO][5088] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.564 [INFO][5088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.615 [INFO][5123] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" HandleID="k8s-pod-network.ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.616 [INFO][5123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.616 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.633 [WARNING][5123] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" HandleID="k8s-pod-network.ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.633 [INFO][5123] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" HandleID="k8s-pod-network.ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.638 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:23.661283 containerd[1814]: 2025-02-13 20:46:23.645 [INFO][5088] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:23.667730 containerd[1814]: time="2025-02-13T20:46:23.666881681Z" level=info msg="TearDown network for sandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\" successfully" Feb 13 20:46:23.667730 containerd[1814]: time="2025-02-13T20:46:23.666919641Z" level=info msg="StopPodSandbox for \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\" returns successfully" Feb 13 20:46:23.673897 systemd[1]: run-netns-cni\x2d3b79850f\x2db447\x2d5461\x2d4eb4\x2d7df02d3f8689.mount: Deactivated successfully. 
Feb 13 20:46:23.679028 containerd[1814]: time="2025-02-13T20:46:23.677204774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597b588699-srxcs,Uid:29dedd29-2cd0-4e89-b378-f68464171a26,Namespace:calico-system,Attempt:1,}" Feb 13 20:46:23.713858 kubelet[3441]: I0213 20:46:23.713655 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-79ckq" podStartSLOduration=31.713617022 podStartE2EDuration="31.713617022s" podCreationTimestamp="2025-02-13 20:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:46:23.711235219 +0000 UTC m=+47.367898163" watchObservedRunningTime="2025-02-13 20:46:23.713617022 +0000 UTC m=+47.370279926" Feb 13 20:46:23.801824 systemd-networkd[1375]: calif0d7238e1ef: Gained IPv6LL Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.620 [INFO][5083] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.622 [INFO][5083] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" iface="eth0" netns="/var/run/netns/cni-5361eba8-6309-3321-7785-b51f7571cc53" Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.624 [INFO][5083] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" iface="eth0" netns="/var/run/netns/cni-5361eba8-6309-3321-7785-b51f7571cc53" Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.625 [INFO][5083] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" iface="eth0" netns="/var/run/netns/cni-5361eba8-6309-3321-7785-b51f7571cc53" Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.625 [INFO][5083] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.625 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.760 [INFO][5133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" HandleID="k8s-pod-network.746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.761 [INFO][5133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.761 [INFO][5133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.805 [WARNING][5133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" HandleID="k8s-pod-network.746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.805 [INFO][5133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" HandleID="k8s-pod-network.746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.811 [INFO][5133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:23.824316 containerd[1814]: 2025-02-13 20:46:23.816 [INFO][5083] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:23.826572 containerd[1814]: time="2025-02-13T20:46:23.826485051Z" level=info msg="TearDown network for sandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\" successfully" Feb 13 20:46:23.826836 containerd[1814]: time="2025-02-13T20:46:23.826802251Z" level=info msg="StopPodSandbox for \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\" returns successfully" Feb 13 20:46:23.827922 containerd[1814]: time="2025-02-13T20:46:23.827736652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-897cn,Uid:c975c533-a1a9-45d1-9ae6-e3e1fc2a3401,Namespace:calico-system,Attempt:1,}" Feb 13 20:46:23.834627 systemd[1]: run-netns-cni\x2d5361eba8\x2d6309\x2d3321\x2d7785\x2db51f7571cc53.mount: Deactivated successfully. Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.648 [INFO][5087] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.648 [INFO][5087] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" iface="eth0" netns="/var/run/netns/cni-87f34134-a8ab-e49c-7588-13aa4cb75b46" Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.649 [INFO][5087] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" iface="eth0" netns="/var/run/netns/cni-87f34134-a8ab-e49c-7588-13aa4cb75b46" Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.651 [INFO][5087] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" iface="eth0" netns="/var/run/netns/cni-87f34134-a8ab-e49c-7588-13aa4cb75b46" Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.651 [INFO][5087] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.652 [INFO][5087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.812 [INFO][5137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" HandleID="k8s-pod-network.a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.814 [INFO][5137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.814 [INFO][5137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.845 [WARNING][5137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" HandleID="k8s-pod-network.a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.846 [INFO][5137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" HandleID="k8s-pod-network.a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.855 [INFO][5137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:23.862055 containerd[1814]: 2025-02-13 20:46:23.860 [INFO][5087] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:23.863396 containerd[1814]: time="2025-02-13T20:46:23.862659098Z" level=info msg="TearDown network for sandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\" successfully" Feb 13 20:46:23.863396 containerd[1814]: time="2025-02-13T20:46:23.862964299Z" level=info msg="StopPodSandbox for \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\" returns successfully" Feb 13 20:46:23.864161 containerd[1814]: time="2025-02-13T20:46:23.864096940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6868ddd855-k47dx,Uid:8e04ade4-b716-4362-a7eb-ded575d07b9c,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:46:24.110382 systemd-networkd[1375]: caliab663734c77: Link UP Feb 13 20:46:24.111914 systemd-networkd[1375]: caliab663734c77: Gained carrier Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:23.899 [INFO][5144] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0 calico-kube-controllers-597b588699- calico-system 29dedd29-2cd0-4e89-b378-f68464171a26 795 0 2025-02-13 20:45:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:597b588699 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.1-a-1c3e1e2868 calico-kube-controllers-597b588699-srxcs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliab663734c77 [] []}} ContainerID="b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" Namespace="calico-system" Pod="calico-kube-controllers-597b588699-srxcs" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-" Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:23.899 [INFO][5144] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" Namespace="calico-system" Pod="calico-kube-controllers-597b588699-srxcs" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.020 [INFO][5186] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" HandleID="k8s-pod-network.b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.044 [INFO][5186] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" HandleID="k8s-pod-network.b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ba200), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-1c3e1e2868", "pod":"calico-kube-controllers-597b588699-srxcs", "timestamp":"2025-02-13 20:46:24.020097266 +0000 UTC"}, Hostname:"ci-4081.3.1-a-1c3e1e2868", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.044 [INFO][5186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.044 [INFO][5186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.044 [INFO][5186] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-1c3e1e2868' Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.046 [INFO][5186] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.055 [INFO][5186] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.065 [INFO][5186] ipam/ipam.go 489: Trying affinity for 192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.068 [INFO][5186] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.072 [INFO][5186] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.073 [INFO][5186] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.076 [INFO][5186] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2 Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.082 [INFO][5186] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.095 [INFO][5186] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.67/26] block=192.168.31.64/26 handle="k8s-pod-network.b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.096 [INFO][5186] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.67/26] handle="k8s-pod-network.b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.096 [INFO][5186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:46:24.145447 containerd[1814]: 2025-02-13 20:46:24.096 [INFO][5186] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.67/26] IPv6=[] ContainerID="b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" HandleID="k8s-pod-network.b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:24.146052 containerd[1814]: 2025-02-13 20:46:24.102 [INFO][5144] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" Namespace="calico-system" Pod="calico-kube-controllers-597b588699-srxcs" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0", GenerateName:"calico-kube-controllers-597b588699-", Namespace:"calico-system", SelfLink:"", UID:"29dedd29-2cd0-4e89-b378-f68464171a26", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597b588699", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"", Pod:"calico-kube-controllers-597b588699-srxcs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab663734c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:24.146052 containerd[1814]: 2025-02-13 20:46:24.103 [INFO][5144] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.67/32] ContainerID="b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" Namespace="calico-system" Pod="calico-kube-controllers-597b588699-srxcs" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:24.146052 containerd[1814]: 2025-02-13 20:46:24.103 [INFO][5144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab663734c77 ContainerID="b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" Namespace="calico-system" Pod="calico-kube-controllers-597b588699-srxcs" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:24.146052 containerd[1814]: 2025-02-13 20:46:24.112 [INFO][5144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" Namespace="calico-system" Pod="calico-kube-controllers-597b588699-srxcs" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:24.146052 
containerd[1814]: 2025-02-13 20:46:24.116 [INFO][5144] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" Namespace="calico-system" Pod="calico-kube-controllers-597b588699-srxcs" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0", GenerateName:"calico-kube-controllers-597b588699-", Namespace:"calico-system", SelfLink:"", UID:"29dedd29-2cd0-4e89-b378-f68464171a26", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597b588699", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2", Pod:"calico-kube-controllers-597b588699-srxcs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab663734c77", MAC:"0e:37:ea:68:c5:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:24.146052 containerd[1814]: 2025-02-13 20:46:24.141 [INFO][5144] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2" Namespace="calico-system" Pod="calico-kube-controllers-597b588699-srxcs" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:24.225789 containerd[1814]: time="2025-02-13T20:46:24.225364656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:46:24.225789 containerd[1814]: time="2025-02-13T20:46:24.225427816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:46:24.225789 containerd[1814]: time="2025-02-13T20:46:24.225450936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:24.225789 containerd[1814]: time="2025-02-13T20:46:24.225589017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:24.245295 systemd-networkd[1375]: cali7bf7d6fdd87: Link UP Feb 13 20:46:24.246109 systemd-networkd[1375]: cali7bf7d6fdd87: Gained carrier Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.059 [INFO][5216] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0 csi-node-driver- calico-system c975c533-a1a9-45d1-9ae6-e3e1fc2a3401 796 0 2025-02-13 20:45:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.1-a-1c3e1e2868 csi-node-driver-897cn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7bf7d6fdd87 [] []}} ContainerID="c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" Namespace="calico-system" Pod="csi-node-driver-897cn" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-" Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.059 [INFO][5216] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" Namespace="calico-system" Pod="csi-node-driver-897cn" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.147 [INFO][5230] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" HandleID="k8s-pod-network.c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.166 [INFO][5230] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" HandleID="k8s-pod-network.c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004cb020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-1c3e1e2868", "pod":"csi-node-driver-897cn", "timestamp":"2025-02-13 20:46:24.147025873 +0000 UTC"}, Hostname:"ci-4081.3.1-a-1c3e1e2868", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.166 [INFO][5230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.166 [INFO][5230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.166 [INFO][5230] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-1c3e1e2868' Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.168 [INFO][5230] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.173 [INFO][5230] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.185 [INFO][5230] ipam/ipam.go 489: Trying affinity for 192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.190 [INFO][5230] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.193 [INFO][5230] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.193 [INFO][5230] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.195 [INFO][5230] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.207 [INFO][5230] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.229 [INFO][5230] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.68/26] block=192.168.31.64/26 handle="k8s-pod-network.c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.230 [INFO][5230] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.68/26] handle="k8s-pod-network.c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.230 [INFO][5230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:46:24.307368 containerd[1814]: 2025-02-13 20:46:24.230 [INFO][5230] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.68/26] IPv6=[] ContainerID="c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" HandleID="k8s-pod-network.c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:24.307997 containerd[1814]: 2025-02-13 20:46:24.237 [INFO][5216] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" Namespace="calico-system" Pod="csi-node-driver-897cn" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c975c533-a1a9-45d1-9ae6-e3e1fc2a3401", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"", Pod:"csi-node-driver-897cn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7bf7d6fdd87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:24.307997 containerd[1814]: 2025-02-13 20:46:24.237 [INFO][5216] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.68/32] ContainerID="c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" Namespace="calico-system" Pod="csi-node-driver-897cn" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:24.307997 containerd[1814]: 2025-02-13 20:46:24.237 [INFO][5216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bf7d6fdd87 ContainerID="c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" Namespace="calico-system" Pod="csi-node-driver-897cn" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:24.307997 containerd[1814]: 2025-02-13 20:46:24.246 [INFO][5216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" Namespace="calico-system" Pod="csi-node-driver-897cn" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:24.307997 containerd[1814]: 2025-02-13 20:46:24.247 [INFO][5216] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" Namespace="calico-system" Pod="csi-node-driver-897cn" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c975c533-a1a9-45d1-9ae6-e3e1fc2a3401", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b", Pod:"csi-node-driver-897cn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7bf7d6fdd87", MAC:"b2:48:a2:69:5b:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:24.307997 containerd[1814]: 2025-02-13 20:46:24.292 [INFO][5216] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b" Namespace="calico-system" Pod="csi-node-driver-897cn" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:24.379343 containerd[1814]: time="2025-02-13T20:46:24.375305094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:46:24.379343 containerd[1814]: time="2025-02-13T20:46:24.376035935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:46:24.379343 containerd[1814]: time="2025-02-13T20:46:24.376055535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:24.379343 containerd[1814]: time="2025-02-13T20:46:24.376161455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:24.383644 systemd[1]: run-netns-cni\x2d87f34134\x2da8ab\x2de49c\x2d7588\x2d13aa4cb75b46.mount: Deactivated successfully. 
Feb 13 20:46:24.405427 systemd-networkd[1375]: cali1084bdaee82: Link UP Feb 13 20:46:24.406049 systemd-networkd[1375]: cali1084bdaee82: Gained carrier Feb 13 20:46:24.438202 containerd[1814]: time="2025-02-13T20:46:24.438144617Z" level=info msg="StopPodSandbox for \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\"" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.065 [INFO][5199] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0 calico-apiserver-6868ddd855- calico-apiserver 8e04ade4-b716-4362-a7eb-ded575d07b9c 797 0 2025-02-13 20:45:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6868ddd855 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-1c3e1e2868 calico-apiserver-6868ddd855-k47dx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1084bdaee82 [] []}} ContainerID="4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-k47dx" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.067 [INFO][5199] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-k47dx" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.172 [INFO][5229] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" HandleID="k8s-pod-network.4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.207 [INFO][5229] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" HandleID="k8s-pod-network.4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000408d70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-1c3e1e2868", "pod":"calico-apiserver-6868ddd855-k47dx", "timestamp":"2025-02-13 20:46:24.172504627 +0000 UTC"}, Hostname:"ci-4081.3.1-a-1c3e1e2868", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.209 [INFO][5229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.230 [INFO][5229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.231 [INFO][5229] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-1c3e1e2868' Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.236 [INFO][5229] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.248 [INFO][5229] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.289 [INFO][5229] ipam/ipam.go 489: Trying affinity for 192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.302 [INFO][5229] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.319 [INFO][5229] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.319 [INFO][5229] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.323 [INFO][5229] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80 Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.353 [INFO][5229] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.383 [INFO][5229] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.69/26] block=192.168.31.64/26 handle="k8s-pod-network.4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.385 [INFO][5229] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.69/26] handle="k8s-pod-network.4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.389 [INFO][5229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:46:24.462857 containerd[1814]: 2025-02-13 20:46:24.389 [INFO][5229] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.69/26] IPv6=[] ContainerID="4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" HandleID="k8s-pod-network.4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:24.463747 containerd[1814]: 2025-02-13 20:46:24.401 [INFO][5199] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-k47dx" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0", GenerateName:"calico-apiserver-6868ddd855-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e04ade4-b716-4362-a7eb-ded575d07b9c", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6868ddd855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"", Pod:"calico-apiserver-6868ddd855-k47dx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1084bdaee82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:24.463747 containerd[1814]: 2025-02-13 20:46:24.401 [INFO][5199] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.69/32] ContainerID="4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-k47dx" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:24.463747 containerd[1814]: 2025-02-13 20:46:24.401 [INFO][5199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1084bdaee82 ContainerID="4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-k47dx" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:24.463747 containerd[1814]: 2025-02-13 20:46:24.406 [INFO][5199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-k47dx" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:24.463747 containerd[1814]: 2025-02-13 20:46:24.416 [INFO][5199] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-k47dx" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0", GenerateName:"calico-apiserver-6868ddd855-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e04ade4-b716-4362-a7eb-ded575d07b9c", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6868ddd855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80", Pod:"calico-apiserver-6868ddd855-k47dx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1084bdaee82", MAC:"a6:ae:54:8b:c4:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:24.463747 containerd[1814]: 2025-02-13 20:46:24.456 [INFO][5199] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80" Namespace="calico-apiserver" Pod="calico-apiserver-6868ddd855-k47dx" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:24.497966 containerd[1814]: time="2025-02-13T20:46:24.497566135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597b588699-srxcs,Uid:29dedd29-2cd0-4e89-b378-f68464171a26,Namespace:calico-system,Attempt:1,} returns sandbox id \"b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2\"" Feb 13 20:46:24.535609 containerd[1814]: time="2025-02-13T20:46:24.535567465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-897cn,Uid:c975c533-a1a9-45d1-9ae6-e3e1fc2a3401,Namespace:calico-system,Attempt:1,} returns sandbox id \"c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b\"" Feb 13 20:46:24.544677 containerd[1814]: time="2025-02-13T20:46:24.544196676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:46:24.544677 containerd[1814]: time="2025-02-13T20:46:24.544277477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:46:24.544677 containerd[1814]: time="2025-02-13T20:46:24.544294197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:24.544677 containerd[1814]: time="2025-02-13T20:46:24.544412957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:24.569223 systemd-networkd[1375]: cali3251d65bd7f: Gained IPv6LL Feb 13 20:46:24.574294 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Feb 13 20:46:24.627360 containerd[1814]: time="2025-02-13T20:46:24.627309266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6868ddd855-k47dx,Uid:8e04ade4-b716-4362-a7eb-ded575d07b9c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80\"" Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.601 [INFO][5368] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.602 [INFO][5368] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" iface="eth0" netns="/var/run/netns/cni-7fb90708-14ae-f6be-6b0d-4416103d8e2e" Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.602 [INFO][5368] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" iface="eth0" netns="/var/run/netns/cni-7fb90708-14ae-f6be-6b0d-4416103d8e2e" Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.602 [INFO][5368] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" iface="eth0" netns="/var/run/netns/cni-7fb90708-14ae-f6be-6b0d-4416103d8e2e" Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.602 [INFO][5368] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.602 [INFO][5368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.636 [INFO][5418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" HandleID="k8s-pod-network.76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.637 [INFO][5418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.637 [INFO][5418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.649 [WARNING][5418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" HandleID="k8s-pod-network.76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.649 [INFO][5418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" HandleID="k8s-pod-network.76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.651 [INFO][5418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:24.654653 containerd[1814]: 2025-02-13 20:46:24.653 [INFO][5368] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:24.655567 containerd[1814]: time="2025-02-13T20:46:24.655495063Z" level=info msg="TearDown network for sandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\" successfully" Feb 13 20:46:24.655745 containerd[1814]: time="2025-02-13T20:46:24.655677343Z" level=info msg="StopPodSandbox for \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\" returns successfully" Feb 13 20:46:24.657096 containerd[1814]: time="2025-02-13T20:46:24.656695825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9vpwd,Uid:93087e5e-d8ec-437a-b934-6999a2c23c2f,Namespace:kube-system,Attempt:1,}" Feb 13 20:46:24.907144 systemd-networkd[1375]: cali298b6ef87b0: Link UP Feb 13 20:46:24.909227 systemd-networkd[1375]: cali298b6ef87b0: Gained carrier Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.767 [INFO][5437] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0 coredns-7db6d8ff4d- kube-system 93087e5e-d8ec-437a-b934-6999a2c23c2f 820 0 2025-02-13 20:45:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-1c3e1e2868 coredns-7db6d8ff4d-9vpwd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali298b6ef87b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9vpwd" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-" Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.767 [INFO][5437] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9vpwd" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.823 [INFO][5443] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" HandleID="k8s-pod-network.4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.837 [INFO][5443] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" HandleID="k8s-pod-network.4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003196c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-1c3e1e2868", "pod":"coredns-7db6d8ff4d-9vpwd", "timestamp":"2025-02-13 20:46:24.823027124 +0000 UTC"}, Hostname:"ci-4081.3.1-a-1c3e1e2868", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.839 [INFO][5443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.840 [INFO][5443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.840 [INFO][5443] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-1c3e1e2868' Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.843 [INFO][5443] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.850 [INFO][5443] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.859 [INFO][5443] ipam/ipam.go 489: Trying affinity for 192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.862 [INFO][5443] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.868 [INFO][5443] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.868 [INFO][5443] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.871 [INFO][5443] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687 Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.880 [INFO][5443] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.894 [INFO][5443] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.70/26] block=192.168.31.64/26 handle="k8s-pod-network.4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.894 [INFO][5443] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.70/26] handle="k8s-pod-network.4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" host="ci-4081.3.1-a-1c3e1e2868" Feb 13 20:46:24.931405 containerd[1814]: 
2025-02-13 20:46:24.894 [INFO][5443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:24.931405 containerd[1814]: 2025-02-13 20:46:24.894 [INFO][5443] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.70/26] IPv6=[] ContainerID="4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" HandleID="k8s-pod-network.4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:24.934070 containerd[1814]: 2025-02-13 20:46:24.900 [INFO][5437] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9vpwd" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"93087e5e-d8ec-437a-b934-6999a2c23c2f", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"", Pod:"coredns-7db6d8ff4d-9vpwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali298b6ef87b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:24.934070 containerd[1814]: 2025-02-13 20:46:24.900 [INFO][5437] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.70/32] ContainerID="4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9vpwd" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:24.934070 containerd[1814]: 2025-02-13 20:46:24.900 [INFO][5437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali298b6ef87b0 ContainerID="4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9vpwd" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:24.934070 containerd[1814]: 2025-02-13 20:46:24.910 [INFO][5437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9vpwd" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:24.934070 containerd[1814]: 2025-02-13 20:46:24.911 [INFO][5437] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9vpwd" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"93087e5e-d8ec-437a-b934-6999a2c23c2f", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687", Pod:"coredns-7db6d8ff4d-9vpwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali298b6ef87b0", MAC:"9a:9d:6d:11:21:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:24.934070 containerd[1814]: 2025-02-13 20:46:24.929 [INFO][5437] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9vpwd" WorkloadEndpoint="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:25.232942 containerd[1814]: time="2025-02-13T20:46:25.232628583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:46:25.232942 containerd[1814]: time="2025-02-13T20:46:25.232691863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:46:25.232942 containerd[1814]: time="2025-02-13T20:46:25.232707664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:25.232942 containerd[1814]: time="2025-02-13T20:46:25.232800984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:46:25.277648 containerd[1814]: time="2025-02-13T20:46:25.277520003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9vpwd,Uid:93087e5e-d8ec-437a-b934-6999a2c23c2f,Namespace:kube-system,Attempt:1,} returns sandbox id \"4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687\"" Feb 13 20:46:25.292066 containerd[1814]: time="2025-02-13T20:46:25.290718340Z" level=info msg="CreateContainer within sandbox \"4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:46:25.303280 containerd[1814]: time="2025-02-13T20:46:25.303223596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:25.314545 containerd[1814]: time="2025-02-13T20:46:25.314486771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 20:46:25.314684 containerd[1814]: time="2025-02-13T20:46:25.314606531Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:25.331210 containerd[1814]: time="2025-02-13T20:46:25.331158873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.492871042s" Feb 13 20:46:25.331210 containerd[1814]: time="2025-02-13T20:46:25.331207953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 20:46:25.331449 containerd[1814]: time="2025-02-13T20:46:25.331339153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:25.333403 containerd[1814]: time="2025-02-13T20:46:25.333268716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:46:25.335282 containerd[1814]: time="2025-02-13T20:46:25.335254479Z" level=info msg="CreateContainer within sandbox \"161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:46:25.364303 containerd[1814]: time="2025-02-13T20:46:25.364255437Z" level=info msg="CreateContainer within sandbox \"4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12c980230d43100886790fbbc022baf0f0741b80a6599c2a37d0b2ae62340434\"" Feb 13 20:46:25.369717 systemd[1]: run-netns-cni\x2d7fb90708\x2d14ae\x2df6be\x2d6b0d\x2d4416103d8e2e.mount: Deactivated successfully. 
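[annotation] The ipam/ipam.go entries above show the allocation pattern for this node: the plugin confirms the host-affine block 192.168.31.64/26 for ci-4081.3.1-a-1c3e1e2868, claims 192.168.31.69 for calico-apiserver-6868ddd855-k47dx and 192.168.31.70 for coredns-7db6d8ff4d-9vpwd out of that block, and each WorkloadEndpoint then carries its address as a /32 in IPNetworks. A minimal sketch checking that block membership with Python's standard ipaddress module (block and addresses are taken directly from the log; nothing else is assumed):

```python
import ipaddress

# Host-affine IPAM block that ipam/ipam.go reports loading for
# host ci-4081.3.1-a-1c3e1e2868 in the entries above.
block = ipaddress.ip_network("192.168.31.64/26")

# Addresses the IPAM plugin assigned in this section:
# .69 (calico-apiserver-6868ddd855-k47dx) and .70 (coredns-7db6d8ff4d-9vpwd).
for addr in ("192.168.31.69", "192.168.31.70"):
    ip = ipaddress.ip_address(addr)
    assert ip in block, f"{ip} outside {block}"
    print(f"{ip} in {block} (block holds {block.num_addresses} addresses)")
```

Note the /26 vs /32 split: allocation happens per-node from the 64-address block, while the pod's endpoint is advertised as a single /32.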
Feb 13 20:46:25.375359 containerd[1814]: time="2025-02-13T20:46:25.374538850Z" level=info msg="StartContainer for \"12c980230d43100886790fbbc022baf0f0741b80a6599c2a37d0b2ae62340434\"" Feb 13 20:46:25.404585 containerd[1814]: time="2025-02-13T20:46:25.404450330Z" level=info msg="CreateContainer within sandbox \"161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ce958e26e16931f29d70c68ede1cb6ca74c92c54370efb65c99c0404678dfed1\"" Feb 13 20:46:25.407043 containerd[1814]: time="2025-02-13T20:46:25.406703493Z" level=info msg="StartContainer for \"ce958e26e16931f29d70c68ede1cb6ca74c92c54370efb65c99c0404678dfed1\"" Feb 13 20:46:25.458515 containerd[1814]: time="2025-02-13T20:46:25.458436201Z" level=info msg="StartContainer for \"12c980230d43100886790fbbc022baf0f0741b80a6599c2a37d0b2ae62340434\" returns successfully" Feb 13 20:46:25.513608 containerd[1814]: time="2025-02-13T20:46:25.513469953Z" level=info msg="StartContainer for \"ce958e26e16931f29d70c68ede1cb6ca74c92c54370efb65c99c0404678dfed1\" returns successfully" Feb 13 20:46:25.721219 systemd-networkd[1375]: cali1084bdaee82: Gained IPv6LL Feb 13 20:46:25.739090 kubelet[3441]: I0213 20:46:25.738772 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6868ddd855-gbjvf" podStartSLOduration=23.242809004 podStartE2EDuration="26.73875321s" podCreationTimestamp="2025-02-13 20:45:59 +0000 UTC" firstStartedPulling="2025-02-13 20:46:21.836697069 +0000 UTC m=+45.493360013" lastFinishedPulling="2025-02-13 20:46:25.332641275 +0000 UTC m=+48.989304219" observedRunningTime="2025-02-13 20:46:25.73820921 +0000 UTC m=+49.394872154" watchObservedRunningTime="2025-02-13 20:46:25.73875321 +0000 UTC m=+49.395416154" Feb 13 20:46:25.767218 kubelet[3441]: I0213 20:46:25.766802 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9vpwd" podStartSLOduration=33.766781127 podStartE2EDuration="33.766781127s" podCreationTimestamp="2025-02-13 20:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:46:25.765518165 +0000 UTC m=+49.422181109" watchObservedRunningTime="2025-02-13 20:46:25.766781127 +0000 UTC m=+49.423444111" Feb 13 20:46:25.785134 systemd-networkd[1375]: cali7bf7d6fdd87: Gained IPv6LL Feb 13 20:46:25.914132 systemd-networkd[1375]: caliab663734c77: Gained IPv6LL Feb 13 20:46:26.681318 systemd-networkd[1375]: cali298b6ef87b0: Gained IPv6LL Feb 13 20:46:26.719045 kubelet[3441]: I0213 20:46:26.717878 3441 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:46:27.283359 containerd[1814]: time="2025-02-13T20:46:27.283304254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:27.285860 containerd[1814]: time="2025-02-13T20:46:27.285823858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 20:46:27.291181 containerd[1814]: time="2025-02-13T20:46:27.291123585Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:27.300201 containerd[1814]: time="2025-02-13T20:46:27.300142677Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:27.301024 containerd[1814]: time="2025-02-13T20:46:27.300877438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.967565401s" Feb 13 20:46:27.301024 containerd[1814]: time="2025-02-13T20:46:27.300914558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 20:46:27.302720 containerd[1814]: time="2025-02-13T20:46:27.302517640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:46:27.317474 containerd[1814]: time="2025-02-13T20:46:27.317431739Z" level=info msg="CreateContainer within sandbox \"b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:46:27.364442 containerd[1814]: time="2025-02-13T20:46:27.364354002Z" level=info msg="CreateContainer within sandbox \"b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"710e5a1681ca913f192873be73b694bb47f320ab0853910a257c1650f9a258ec\"" Feb 13 20:46:27.365138 containerd[1814]: time="2025-02-13T20:46:27.365108283Z" level=info msg="StartContainer for \"710e5a1681ca913f192873be73b694bb47f320ab0853910a257c1650f9a258ec\"" Feb 13 20:46:27.429630 containerd[1814]: time="2025-02-13T20:46:27.429570328Z" level=info msg="StartContainer for \"710e5a1681ca913f192873be73b694bb47f320ab0853910a257c1650f9a258ec\" returns successfully" Feb 13 20:46:28.749270 systemd[1]: run-containerd-runc-k8s.io-710e5a1681ca913f192873be73b694bb47f320ab0853910a257c1650f9a258ec-runc.Z9mnU3.mount: Deactivated successfully. 
Feb 13 20:46:28.785147 kubelet[3441]: I0213 20:46:28.785078 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-597b588699-srxcs" podStartSLOduration=26.99455036 podStartE2EDuration="29.785057686s" podCreationTimestamp="2025-02-13 20:45:59 +0000 UTC" firstStartedPulling="2025-02-13 20:46:24.511285193 +0000 UTC m=+48.167948137" lastFinishedPulling="2025-02-13 20:46:27.301792519 +0000 UTC m=+50.958455463" observedRunningTime="2025-02-13 20:46:27.746229628 +0000 UTC m=+51.402892572" watchObservedRunningTime="2025-02-13 20:46:28.785057686 +0000 UTC m=+52.441720630" Feb 13 20:46:29.068589 containerd[1814]: time="2025-02-13T20:46:29.068452461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:29.071671 containerd[1814]: time="2025-02-13T20:46:29.071434745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 20:46:29.076387 containerd[1814]: time="2025-02-13T20:46:29.076349392Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:29.083257 containerd[1814]: time="2025-02-13T20:46:29.083176601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:29.084168 containerd[1814]: time="2025-02-13T20:46:29.084040642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.781487082s" Feb 13 20:46:29.084168 containerd[1814]: time="2025-02-13T20:46:29.084075122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 20:46:29.086498 containerd[1814]: time="2025-02-13T20:46:29.086271645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:46:29.087732 containerd[1814]: time="2025-02-13T20:46:29.087612287Z" level=info msg="CreateContainer within sandbox \"c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:46:29.130811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1545820176.mount: Deactivated successfully. 
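[annotation] The csi image pull above can be cross-checked against containerd's own timing: the "PullImage ghcr.io/flatcar/calico/csi:v3.29.1" request is logged at 20:46:27.302517640 and the "Pulled image ... in 1.781487082s" result at 20:46:29.084040642. A quick delta over those two log timestamps:

```python
from decimal import Decimal

# Seconds within 20:46, from the two containerd entries above.
requested = Decimal("27.302517640")  # PullImage ...calico/csi:v3.29.1
reported  = Decimal("29.084040642")  # Pulled image ... in 1.781487082s
print(reported - requested)  # 1.781523002
```

The log-line delta (about 1.7815s) slightly exceeds containerd's self-reported 1.781487082s, which is expected: containerd measures the pull internally and the result line is emitted just afterwards.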
Feb 13 20:46:29.145807 containerd[1814]: time="2025-02-13T20:46:29.145738564Z" level=info msg="CreateContainer within sandbox \"c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3b0124fad787afd93cc5b5686b2da64cd736a544bc6e2b2bcef84db41a82c9d8\"" Feb 13 20:46:29.146869 containerd[1814]: time="2025-02-13T20:46:29.146677925Z" level=info msg="StartContainer for \"3b0124fad787afd93cc5b5686b2da64cd736a544bc6e2b2bcef84db41a82c9d8\"" Feb 13 20:46:29.205117 containerd[1814]: time="2025-02-13T20:46:29.205045883Z" level=info msg="StartContainer for \"3b0124fad787afd93cc5b5686b2da64cd736a544bc6e2b2bcef84db41a82c9d8\" returns successfully" Feb 13 20:46:29.475269 containerd[1814]: time="2025-02-13T20:46:29.475202001Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:29.479146 containerd[1814]: time="2025-02-13T20:46:29.478593445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:46:29.480803 containerd[1814]: time="2025-02-13T20:46:29.480765288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 394.452723ms" Feb 13 20:46:29.480875 containerd[1814]: time="2025-02-13T20:46:29.480806848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 20:46:29.484196 containerd[1814]: time="2025-02-13T20:46:29.483308372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:46:29.485597 containerd[1814]: time="2025-02-13T20:46:29.485564135Z" level=info msg="CreateContainer within sandbox \"4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:46:29.537977 containerd[1814]: time="2025-02-13T20:46:29.537853164Z" level=info msg="CreateContainer within sandbox \"4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b96383794348b53d6920d81cb97d2539a533231cf0c5b7daf045c7a9bf01e38c\"" Feb 13 20:46:29.539901 containerd[1814]: time="2025-02-13T20:46:29.538533645Z" level=info msg="StartContainer for \"b96383794348b53d6920d81cb97d2539a533231cf0c5b7daf045c7a9bf01e38c\"" Feb 13 20:46:29.601666 containerd[1814]: time="2025-02-13T20:46:29.601617968Z" level=info msg="StartContainer for \"b96383794348b53d6920d81cb97d2539a533231cf0c5b7daf045c7a9bf01e38c\" returns successfully" Feb 13 20:46:29.752937 kubelet[3441]: I0213 20:46:29.752792 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6868ddd855-k47dx" podStartSLOduration=25.900849309 podStartE2EDuration="30.752767169s" podCreationTimestamp="2025-02-13 20:45:59 +0000 UTC" firstStartedPulling="2025-02-13 20:46:24.629784109 +0000 UTC m=+48.286447013" lastFinishedPulling="2025-02-13 20:46:29.481701929 +0000 UTC m=+53.138364873" observedRunningTime="2025-02-13 20:46:29.752177288 +0000 UTC m=+53.408840232" 
watchObservedRunningTime="2025-02-13 20:46:29.752767169 +0000 UTC m=+53.409430113" Feb 13 20:46:30.734691 kubelet[3441]: I0213 20:46:30.734627 3441 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:46:30.929410 containerd[1814]: time="2025-02-13T20:46:30.928765448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:30.932960 containerd[1814]: time="2025-02-13T20:46:30.932849974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 20:46:30.940516 containerd[1814]: time="2025-02-13T20:46:30.939229422Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:30.944370 containerd[1814]: time="2025-02-13T20:46:30.944321589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:46:30.945026 containerd[1814]: time="2025-02-13T20:46:30.944967550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.461619818s" Feb 13 20:46:30.945026 containerd[1814]: time="2025-02-13T20:46:30.945004950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 20:46:30.952478 containerd[1814]: time="2025-02-13T20:46:30.952351840Z" level=info msg="CreateContainer within sandbox \"c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:46:31.001514 containerd[1814]: time="2025-02-13T20:46:30.999571702Z" level=info msg="CreateContainer within sandbox \"c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7ed03ea7202355566ead57805ba4570cdff96d555be77373979a74eeef8dd8ce\"" Feb 13 20:46:31.002317 containerd[1814]: time="2025-02-13T20:46:31.002279746Z" level=info msg="StartContainer for \"7ed03ea7202355566ead57805ba4570cdff96d555be77373979a74eeef8dd8ce\"" Feb 13 20:46:31.218768 containerd[1814]: time="2025-02-13T20:46:31.218640993Z" level=info msg="StartContainer for \"7ed03ea7202355566ead57805ba4570cdff96d555be77373979a74eeef8dd8ce\" returns successfully" Feb 13 20:46:31.771237 kubelet[3441]: I0213 20:46:31.771111 3441 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:46:31.771237 kubelet[3441]: I0213 20:46:31.771149 3441 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:46:36.717745 containerd[1814]: time="2025-02-13T20:46:36.717567103Z" level=info msg="StopPodSandbox for 
\"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\"" Feb 13 20:46:36.794247 containerd[1814]: 2025-02-13 20:46:36.760 [WARNING][5808] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0", GenerateName:"calico-apiserver-6868ddd855-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e04ade4-b716-4362-a7eb-ded575d07b9c", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6868ddd855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80", Pod:"calico-apiserver-6868ddd855-k47dx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1084bdaee82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:36.794247 containerd[1814]: 2025-02-13 20:46:36.761 [INFO][5808] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:36.794247 containerd[1814]: 2025-02-13 20:46:36.761 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" iface="eth0" netns="" Feb 13 20:46:36.794247 containerd[1814]: 2025-02-13 20:46:36.761 [INFO][5808] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:36.794247 containerd[1814]: 2025-02-13 20:46:36.761 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:36.794247 containerd[1814]: 2025-02-13 20:46:36.781 [INFO][5814] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" HandleID="k8s-pod-network.a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:36.794247 containerd[1814]: 2025-02-13 20:46:36.781 [INFO][5814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:36.794247 containerd[1814]: 2025-02-13 20:46:36.781 [INFO][5814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:46:36.794247 containerd[1814]: 2025-02-13 20:46:36.789 [WARNING][5814] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" HandleID="k8s-pod-network.a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:36.794247 containerd[1814]: 2025-02-13 20:46:36.789 [INFO][5814] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" HandleID="k8s-pod-network.a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:36.794247 containerd[1814]: 2025-02-13 20:46:36.791 [INFO][5814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:36.794247 containerd[1814]: 2025-02-13 20:46:36.792 [INFO][5808] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:36.794247 containerd[1814]: time="2025-02-13T20:46:36.794095284Z" level=info msg="TearDown network for sandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\" successfully" Feb 13 20:46:36.794247 containerd[1814]: time="2025-02-13T20:46:36.794134724Z" level=info msg="StopPodSandbox for \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\" returns successfully" Feb 13 20:46:36.795544 containerd[1814]: time="2025-02-13T20:46:36.795258285Z" level=info msg="RemovePodSandbox for \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\"" Feb 13 20:46:36.795544 containerd[1814]: time="2025-02-13T20:46:36.795292485Z" level=info msg="Forcibly stopping sandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\"" Feb 13 20:46:36.872712 containerd[1814]: 2025-02-13 20:46:36.835 [WARNING][5832] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0", GenerateName:"calico-apiserver-6868ddd855-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e04ade4-b716-4362-a7eb-ded575d07b9c", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6868ddd855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"4497e4e69d08691e6e2ed9a1c6aea09a7137915b5063a7b117895aed7025fb80", Pod:"calico-apiserver-6868ddd855-k47dx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1084bdaee82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:36.872712 containerd[1814]: 2025-02-13 20:46:36.836 [INFO][5832] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:36.872712 containerd[1814]: 2025-02-13 20:46:36.836 [INFO][5832] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" iface="eth0" netns="" Feb 13 20:46:36.872712 containerd[1814]: 2025-02-13 20:46:36.836 [INFO][5832] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:36.872712 containerd[1814]: 2025-02-13 20:46:36.836 [INFO][5832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:36.872712 containerd[1814]: 2025-02-13 20:46:36.858 [INFO][5838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" HandleID="k8s-pod-network.a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:36.872712 containerd[1814]: 2025-02-13 20:46:36.859 [INFO][5838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:36.872712 containerd[1814]: 2025-02-13 20:46:36.859 [INFO][5838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:36.872712 containerd[1814]: 2025-02-13 20:46:36.867 [WARNING][5838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" HandleID="k8s-pod-network.a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:36.872712 containerd[1814]: 2025-02-13 20:46:36.867 [INFO][5838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" HandleID="k8s-pod-network.a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--k47dx-eth0" Feb 13 20:46:36.872712 containerd[1814]: 2025-02-13 20:46:36.869 [INFO][5838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:36.872712 containerd[1814]: 2025-02-13 20:46:36.871 [INFO][5832] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb" Feb 13 20:46:36.872712 containerd[1814]: time="2025-02-13T20:46:36.872630947Z" level=info msg="TearDown network for sandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\" successfully" Feb 13 20:46:36.903942 containerd[1814]: time="2025-02-13T20:46:36.903669148Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:46:36.903942 containerd[1814]: time="2025-02-13T20:46:36.903777188Z" level=info msg="RemovePodSandbox \"a5275c304246e8123d00393eab98c3d7a5dc89030ad8c69ecc73547d0da6f3eb\" returns successfully" Feb 13 20:46:36.904351 containerd[1814]: time="2025-02-13T20:46:36.904323429Z" level=info msg="StopPodSandbox for \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\"" Feb 13 20:46:37.000672 containerd[1814]: 2025-02-13 20:46:36.944 [WARNING][5856] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0", GenerateName:"calico-kube-controllers-597b588699-", Namespace:"calico-system", SelfLink:"", UID:"29dedd29-2cd0-4e89-b378-f68464171a26", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597b588699", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2", Pod:"calico-kube-controllers-597b588699-srxcs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab663734c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:37.000672 containerd[1814]: 2025-02-13 20:46:36.945 [INFO][5856] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:37.000672 containerd[1814]: 2025-02-13 20:46:36.945 [INFO][5856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" iface="eth0" netns="" Feb 13 20:46:37.000672 containerd[1814]: 2025-02-13 20:46:36.945 [INFO][5856] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:37.000672 containerd[1814]: 2025-02-13 20:46:36.945 [INFO][5856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:37.000672 containerd[1814]: 2025-02-13 20:46:36.981 [INFO][5862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" HandleID="k8s-pod-network.ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:37.000672 containerd[1814]: 2025-02-13 20:46:36.981 [INFO][5862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:37.000672 containerd[1814]: 2025-02-13 20:46:36.981 [INFO][5862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:37.000672 containerd[1814]: 2025-02-13 20:46:36.994 [WARNING][5862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" HandleID="k8s-pod-network.ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:37.000672 containerd[1814]: 2025-02-13 20:46:36.994 [INFO][5862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" HandleID="k8s-pod-network.ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:37.000672 containerd[1814]: 2025-02-13 20:46:36.995 [INFO][5862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:37.000672 containerd[1814]: 2025-02-13 20:46:36.997 [INFO][5856] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:37.000672 containerd[1814]: time="2025-02-13T20:46:37.000032515Z" level=info msg="TearDown network for sandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\" successfully" Feb 13 20:46:37.000672 containerd[1814]: time="2025-02-13T20:46:37.000062755Z" level=info msg="StopPodSandbox for \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\" returns successfully" Feb 13 20:46:37.002315 containerd[1814]: time="2025-02-13T20:46:37.001478877Z" level=info msg="RemovePodSandbox for \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\"" Feb 13 20:46:37.002403 containerd[1814]: time="2025-02-13T20:46:37.002325518Z" level=info msg="Forcibly stopping sandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\"" Feb 13 20:46:37.074845 containerd[1814]: 2025-02-13 20:46:37.039 [WARNING][5880] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0", GenerateName:"calico-kube-controllers-597b588699-", Namespace:"calico-system", SelfLink:"", UID:"29dedd29-2cd0-4e89-b378-f68464171a26", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597b588699", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"b59c23bc616cbc46563d5e2245391d9aa8ca41739cf7572f1db8acf260b3aea2", Pod:"calico-kube-controllers-597b588699-srxcs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab663734c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:37.074845 containerd[1814]: 2025-02-13 20:46:37.039 [INFO][5880] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:37.074845 containerd[1814]: 2025-02-13 20:46:37.039 [INFO][5880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" iface="eth0" netns="" Feb 13 20:46:37.074845 containerd[1814]: 2025-02-13 20:46:37.039 [INFO][5880] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:37.074845 containerd[1814]: 2025-02-13 20:46:37.039 [INFO][5880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:37.074845 containerd[1814]: 2025-02-13 20:46:37.059 [INFO][5886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" HandleID="k8s-pod-network.ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:37.074845 containerd[1814]: 2025-02-13 20:46:37.059 [INFO][5886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:37.074845 containerd[1814]: 2025-02-13 20:46:37.059 [INFO][5886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:37.074845 containerd[1814]: 2025-02-13 20:46:37.069 [WARNING][5886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" HandleID="k8s-pod-network.ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:37.074845 containerd[1814]: 2025-02-13 20:46:37.070 [INFO][5886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" HandleID="k8s-pod-network.ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--kube--controllers--597b588699--srxcs-eth0" Feb 13 20:46:37.074845 containerd[1814]: 2025-02-13 20:46:37.071 [INFO][5886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:37.074845 containerd[1814]: 2025-02-13 20:46:37.073 [INFO][5880] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388" Feb 13 20:46:37.075314 containerd[1814]: time="2025-02-13T20:46:37.074905174Z" level=info msg="TearDown network for sandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\" successfully" Feb 13 20:46:37.084607 containerd[1814]: time="2025-02-13T20:46:37.084560986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:46:37.084676 containerd[1814]: time="2025-02-13T20:46:37.084642186Z" level=info msg="RemovePodSandbox \"ff3eb8b0fb15d70e31caee7780cc56a6fc326e37bd3adc9a667f26b12ac56388\" returns successfully" Feb 13 20:46:37.085186 containerd[1814]: time="2025-02-13T20:46:37.085152027Z" level=info msg="StopPodSandbox for \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\"" Feb 13 20:46:37.156251 containerd[1814]: 2025-02-13 20:46:37.120 [WARNING][5905] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"93087e5e-d8ec-437a-b934-6999a2c23c2f", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687", Pod:"coredns-7db6d8ff4d-9vpwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali298b6ef87b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:37.156251 containerd[1814]: 2025-02-13 20:46:37.120 [INFO][5905] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:37.156251 containerd[1814]: 2025-02-13 20:46:37.120 [INFO][5905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" iface="eth0" netns="" Feb 13 20:46:37.156251 containerd[1814]: 2025-02-13 20:46:37.120 [INFO][5905] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:37.156251 containerd[1814]: 2025-02-13 20:46:37.120 [INFO][5905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:37.156251 containerd[1814]: 2025-02-13 20:46:37.141 [INFO][5911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" HandleID="k8s-pod-network.76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:37.156251 containerd[1814]: 2025-02-13 20:46:37.143 [INFO][5911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:37.156251 containerd[1814]: 2025-02-13 20:46:37.143 [INFO][5911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:46:37.156251 containerd[1814]: 2025-02-13 20:46:37.151 [WARNING][5911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" HandleID="k8s-pod-network.76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:37.156251 containerd[1814]: 2025-02-13 20:46:37.151 [INFO][5911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" HandleID="k8s-pod-network.76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:37.156251 containerd[1814]: 2025-02-13 20:46:37.153 [INFO][5911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:37.156251 containerd[1814]: 2025-02-13 20:46:37.154 [INFO][5905] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:37.156642 containerd[1814]: time="2025-02-13T20:46:37.156296081Z" level=info msg="TearDown network for sandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\" successfully" Feb 13 20:46:37.156642 containerd[1814]: time="2025-02-13T20:46:37.156321521Z" level=info msg="StopPodSandbox for \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\" returns successfully" Feb 13 20:46:37.156817 containerd[1814]: time="2025-02-13T20:46:37.156786481Z" level=info msg="RemovePodSandbox for \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\"" Feb 13 20:46:37.156855 containerd[1814]: time="2025-02-13T20:46:37.156821682Z" level=info msg="Forcibly stopping sandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\"" Feb 13 20:46:37.225491 containerd[1814]: 2025-02-13 20:46:37.191 [WARNING][5929] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"93087e5e-d8ec-437a-b934-6999a2c23c2f", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"4519402ca79568ffd5d9ec17b971b75f60533aff2f5ce193ea66624ece5ba687", Pod:"coredns-7db6d8ff4d-9vpwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali298b6ef87b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:37.225491 containerd[1814]: 2025-02-13 20:46:37.191 [INFO][5929] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:37.225491 containerd[1814]: 2025-02-13 20:46:37.191 [INFO][5929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" iface="eth0" netns="" Feb 13 20:46:37.225491 containerd[1814]: 2025-02-13 20:46:37.191 [INFO][5929] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:37.225491 containerd[1814]: 2025-02-13 20:46:37.191 [INFO][5929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:37.225491 containerd[1814]: 2025-02-13 20:46:37.213 [INFO][5936] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" HandleID="k8s-pod-network.76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:37.225491 containerd[1814]: 2025-02-13 20:46:37.213 [INFO][5936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:37.225491 containerd[1814]: 2025-02-13 20:46:37.213 [INFO][5936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:46:37.225491 containerd[1814]: 2025-02-13 20:46:37.221 [WARNING][5936] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" HandleID="k8s-pod-network.76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:37.225491 containerd[1814]: 2025-02-13 20:46:37.221 [INFO][5936] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" HandleID="k8s-pod-network.76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--9vpwd-eth0" Feb 13 20:46:37.225491 containerd[1814]: 2025-02-13 20:46:37.222 [INFO][5936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:37.225491 containerd[1814]: 2025-02-13 20:46:37.223 [INFO][5929] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb" Feb 13 20:46:37.225887 containerd[1814]: time="2025-02-13T20:46:37.225531892Z" level=info msg="TearDown network for sandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\" successfully" Feb 13 20:46:37.233850 containerd[1814]: time="2025-02-13T20:46:37.233805423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:46:37.233946 containerd[1814]: time="2025-02-13T20:46:37.233885223Z" level=info msg="RemovePodSandbox \"76fdcc5d5270b3aefab585a5ec355d2fe6e4fd83f18571b5494f96af4ea99dfb\" returns successfully" Feb 13 20:46:37.234395 containerd[1814]: time="2025-02-13T20:46:37.234367024Z" level=info msg="StopPodSandbox for \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\"" Feb 13 20:46:37.303062 containerd[1814]: 2025-02-13 20:46:37.272 [WARNING][5954] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0", GenerateName:"calico-apiserver-6868ddd855-", Namespace:"calico-apiserver", SelfLink:"", UID:"c48df888-6f83-4248-848b-c107d69d27c0", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6868ddd855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0", Pod:"calico-apiserver-6868ddd855-gbjvf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0d7238e1ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:37.303062 containerd[1814]: 2025-02-13 20:46:37.272 [INFO][5954] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:37.303062 containerd[1814]: 2025-02-13 20:46:37.272 [INFO][5954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" iface="eth0" netns="" Feb 13 20:46:37.303062 containerd[1814]: 2025-02-13 20:46:37.272 [INFO][5954] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:37.303062 containerd[1814]: 2025-02-13 20:46:37.272 [INFO][5954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:37.303062 containerd[1814]: 2025-02-13 20:46:37.290 [INFO][5960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" HandleID="k8s-pod-network.ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:37.303062 containerd[1814]: 2025-02-13 20:46:37.290 [INFO][5960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:37.303062 containerd[1814]: 2025-02-13 20:46:37.290 [INFO][5960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:37.303062 containerd[1814]: 2025-02-13 20:46:37.298 [WARNING][5960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" HandleID="k8s-pod-network.ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:37.303062 containerd[1814]: 2025-02-13 20:46:37.299 [INFO][5960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" HandleID="k8s-pod-network.ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:37.303062 containerd[1814]: 2025-02-13 20:46:37.300 [INFO][5960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:37.303062 containerd[1814]: 2025-02-13 20:46:37.301 [INFO][5954] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:37.303447 containerd[1814]: time="2025-02-13T20:46:37.303063274Z" level=info msg="TearDown network for sandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\" successfully" Feb 13 20:46:37.303447 containerd[1814]: time="2025-02-13T20:46:37.303088794Z" level=info msg="StopPodSandbox for \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\" returns successfully" Feb 13 20:46:37.305150 containerd[1814]: time="2025-02-13T20:46:37.305112997Z" level=info msg="RemovePodSandbox for \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\"" Feb 13 20:46:37.305224 containerd[1814]: time="2025-02-13T20:46:37.305153957Z" level=info msg="Forcibly stopping sandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\"" Feb 13 20:46:37.374234 containerd[1814]: 2025-02-13 20:46:37.339 [WARNING][5978] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0", GenerateName:"calico-apiserver-6868ddd855-", Namespace:"calico-apiserver", SelfLink:"", UID:"c48df888-6f83-4248-848b-c107d69d27c0", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6868ddd855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"161ac58122b5b4a7ffeb90a4602a7c4b8bf84e6fa8d03c71cb2d72efd63c10d0", Pod:"calico-apiserver-6868ddd855-gbjvf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0d7238e1ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:37.374234 containerd[1814]: 2025-02-13 20:46:37.339 [INFO][5978] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:37.374234 containerd[1814]: 2025-02-13 20:46:37.339 [INFO][5978] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" iface="eth0" netns="" Feb 13 20:46:37.374234 containerd[1814]: 2025-02-13 20:46:37.339 [INFO][5978] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:37.374234 containerd[1814]: 2025-02-13 20:46:37.339 [INFO][5978] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:37.374234 containerd[1814]: 2025-02-13 20:46:37.360 [INFO][5985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" HandleID="k8s-pod-network.ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:37.374234 containerd[1814]: 2025-02-13 20:46:37.360 [INFO][5985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:37.374234 containerd[1814]: 2025-02-13 20:46:37.360 [INFO][5985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:37.374234 containerd[1814]: 2025-02-13 20:46:37.368 [WARNING][5985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" HandleID="k8s-pod-network.ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:37.374234 containerd[1814]: 2025-02-13 20:46:37.368 [INFO][5985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" HandleID="k8s-pod-network.ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-calico--apiserver--6868ddd855--gbjvf-eth0" Feb 13 20:46:37.374234 containerd[1814]: 2025-02-13 20:46:37.369 [INFO][5985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:37.374234 containerd[1814]: 2025-02-13 20:46:37.371 [INFO][5978] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d" Feb 13 20:46:37.374626 containerd[1814]: time="2025-02-13T20:46:37.374295968Z" level=info msg="TearDown network for sandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\" successfully" Feb 13 20:46:37.384728 containerd[1814]: time="2025-02-13T20:46:37.384666542Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:46:37.384813 containerd[1814]: time="2025-02-13T20:46:37.384788782Z" level=info msg="RemovePodSandbox \"ad589720bc38f80b74df4da3c164350271f87346b67749d7317505623a57266d\" returns successfully" Feb 13 20:46:37.385370 containerd[1814]: time="2025-02-13T20:46:37.385345743Z" level=info msg="StopPodSandbox for \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\"" Feb 13 20:46:37.453443 containerd[1814]: 2025-02-13 20:46:37.420 [WARNING][6003] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8b4196eb-15cf-4412-9d84-5406140b93bb", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274", Pod:"coredns-7db6d8ff4d-79ckq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3251d65bd7f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:37.453443 containerd[1814]: 2025-02-13 20:46:37.420 [INFO][6003] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:37.453443 containerd[1814]: 2025-02-13 20:46:37.420 [INFO][6003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" iface="eth0" netns="" Feb 13 20:46:37.453443 containerd[1814]: 2025-02-13 20:46:37.420 [INFO][6003] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:37.453443 containerd[1814]: 2025-02-13 20:46:37.420 [INFO][6003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:37.453443 containerd[1814]: 2025-02-13 20:46:37.441 [INFO][6009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" HandleID="k8s-pod-network.9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:37.453443 containerd[1814]: 2025-02-13 20:46:37.441 [INFO][6009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:37.453443 containerd[1814]: 2025-02-13 20:46:37.441 [INFO][6009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:46:37.453443 containerd[1814]: 2025-02-13 20:46:37.449 [WARNING][6009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" HandleID="k8s-pod-network.9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:37.453443 containerd[1814]: 2025-02-13 20:46:37.449 [INFO][6009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" HandleID="k8s-pod-network.9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:37.453443 containerd[1814]: 2025-02-13 20:46:37.450 [INFO][6009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:37.453443 containerd[1814]: 2025-02-13 20:46:37.452 [INFO][6003] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:37.454158 containerd[1814]: time="2025-02-13T20:46:37.453485872Z" level=info msg="TearDown network for sandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\" successfully" Feb 13 20:46:37.454158 containerd[1814]: time="2025-02-13T20:46:37.453511072Z" level=info msg="StopPodSandbox for \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\" returns successfully" Feb 13 20:46:37.454158 containerd[1814]: time="2025-02-13T20:46:37.454115873Z" level=info msg="RemovePodSandbox for \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\"" Feb 13 20:46:37.454158 containerd[1814]: time="2025-02-13T20:46:37.454147313Z" level=info msg="Forcibly stopping sandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\"" Feb 13 20:46:37.525658 containerd[1814]: 2025-02-13 20:46:37.492 [WARNING][6027] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8b4196eb-15cf-4412-9d84-5406140b93bb", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"5dc88e561d782bf321ee4432cad081281e797e575311fb38c00e56a1daac0274", Pod:"coredns-7db6d8ff4d-79ckq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3251d65bd7f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:37.525658 containerd[1814]: 2025-02-13 20:46:37.492 [INFO][6027] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:37.525658 containerd[1814]: 2025-02-13 20:46:37.492 [INFO][6027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" iface="eth0" netns="" Feb 13 20:46:37.525658 containerd[1814]: 2025-02-13 20:46:37.492 [INFO][6027] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:37.525658 containerd[1814]: 2025-02-13 20:46:37.492 [INFO][6027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:37.525658 containerd[1814]: 2025-02-13 20:46:37.511 [INFO][6033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" HandleID="k8s-pod-network.9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:37.525658 containerd[1814]: 2025-02-13 20:46:37.511 [INFO][6033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:37.525658 containerd[1814]: 2025-02-13 20:46:37.511 [INFO][6033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:46:37.525658 containerd[1814]: 2025-02-13 20:46:37.520 [WARNING][6033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" HandleID="k8s-pod-network.9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:37.525658 containerd[1814]: 2025-02-13 20:46:37.521 [INFO][6033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" HandleID="k8s-pod-network.9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-coredns--7db6d8ff4d--79ckq-eth0" Feb 13 20:46:37.525658 containerd[1814]: 2025-02-13 20:46:37.522 [INFO][6033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:37.525658 containerd[1814]: 2025-02-13 20:46:37.524 [INFO][6027] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1" Feb 13 20:46:37.526094 containerd[1814]: time="2025-02-13T20:46:37.525708167Z" level=info msg="TearDown network for sandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\" successfully" Feb 13 20:46:37.534680 containerd[1814]: time="2025-02-13T20:46:37.534628219Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:46:37.534768 containerd[1814]: time="2025-02-13T20:46:37.534716019Z" level=info msg="RemovePodSandbox \"9c12f01769cb573d309aa7797b88a317d81864101016607bec112293060279c1\" returns successfully" Feb 13 20:46:37.535330 containerd[1814]: time="2025-02-13T20:46:37.535292100Z" level=info msg="StopPodSandbox for \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\"" Feb 13 20:46:37.605426 containerd[1814]: 2025-02-13 20:46:37.571 [WARNING][6051] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c975c533-a1a9-45d1-9ae6-e3e1fc2a3401", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b", Pod:"csi-node-driver-897cn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7bf7d6fdd87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:37.605426 containerd[1814]: 2025-02-13 20:46:37.571 [INFO][6051] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:37.605426 containerd[1814]: 2025-02-13 20:46:37.571 [INFO][6051] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" iface="eth0" netns="" Feb 13 20:46:37.605426 containerd[1814]: 2025-02-13 20:46:37.571 [INFO][6051] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:37.605426 containerd[1814]: 2025-02-13 20:46:37.571 [INFO][6051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:37.605426 containerd[1814]: 2025-02-13 20:46:37.590 [INFO][6057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" HandleID="k8s-pod-network.746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:37.605426 containerd[1814]: 2025-02-13 20:46:37.592 [INFO][6057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:37.605426 containerd[1814]: 2025-02-13 20:46:37.592 [INFO][6057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:37.605426 containerd[1814]: 2025-02-13 20:46:37.600 [WARNING][6057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" HandleID="k8s-pod-network.746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:37.605426 containerd[1814]: 2025-02-13 20:46:37.600 [INFO][6057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" HandleID="k8s-pod-network.746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:37.605426 containerd[1814]: 2025-02-13 20:46:37.602 [INFO][6057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:37.605426 containerd[1814]: 2025-02-13 20:46:37.604 [INFO][6051] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:37.605819 containerd[1814]: time="2025-02-13T20:46:37.605674473Z" level=info msg="TearDown network for sandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\" successfully" Feb 13 20:46:37.605819 containerd[1814]: time="2025-02-13T20:46:37.605704473Z" level=info msg="StopPodSandbox for \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\" returns successfully" Feb 13 20:46:37.608064 containerd[1814]: time="2025-02-13T20:46:37.607922116Z" level=info msg="RemovePodSandbox for \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\"" Feb 13 20:46:37.608064 containerd[1814]: time="2025-02-13T20:46:37.608041716Z" level=info msg="Forcibly stopping sandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\"" Feb 13 20:46:37.678418 containerd[1814]: 2025-02-13 20:46:37.643 [WARNING][6075] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c975c533-a1a9-45d1-9ae6-e3e1fc2a3401", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-1c3e1e2868", ContainerID:"c1b978bec25bb3a1aafd0bb4cd1bc27bba42f5fbdbff3d504cf26456b7e7a08b", Pod:"csi-node-driver-897cn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7bf7d6fdd87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:46:37.678418 containerd[1814]: 2025-02-13 20:46:37.644 [INFO][6075] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:37.678418 containerd[1814]: 2025-02-13 20:46:37.644 [INFO][6075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" iface="eth0" netns="" Feb 13 20:46:37.678418 containerd[1814]: 2025-02-13 20:46:37.644 [INFO][6075] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:37.678418 containerd[1814]: 2025-02-13 20:46:37.644 [INFO][6075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:37.678418 containerd[1814]: 2025-02-13 20:46:37.663 [INFO][6081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" HandleID="k8s-pod-network.746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:37.678418 containerd[1814]: 2025-02-13 20:46:37.664 [INFO][6081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:46:37.678418 containerd[1814]: 2025-02-13 20:46:37.664 [INFO][6081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:46:37.678418 containerd[1814]: 2025-02-13 20:46:37.673 [WARNING][6081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" HandleID="k8s-pod-network.746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:37.678418 containerd[1814]: 2025-02-13 20:46:37.673 [INFO][6081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" HandleID="k8s-pod-network.746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Workload="ci--4081.3.1--a--1c3e1e2868-k8s-csi--node--driver--897cn-eth0" Feb 13 20:46:37.678418 containerd[1814]: 2025-02-13 20:46:37.674 [INFO][6081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:46:37.678418 containerd[1814]: 2025-02-13 20:46:37.676 [INFO][6075] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8" Feb 13 20:46:37.678812 containerd[1814]: time="2025-02-13T20:46:37.678486409Z" level=info msg="TearDown network for sandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\" successfully" Feb 13 20:46:37.688079 containerd[1814]: time="2025-02-13T20:46:37.688005901Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:46:37.688162 containerd[1814]: time="2025-02-13T20:46:37.688126941Z" level=info msg="RemovePodSandbox \"746a787db262a3e383fa3e9de6d98300e1c1320dae41e2f0286028c3435d06d8\" returns successfully" Feb 13 20:46:39.986164 systemd[1]: run-containerd-runc-k8s.io-710e5a1681ca913f192873be73b694bb47f320ab0853910a257c1650f9a258ec-runc.hNVKxw.mount: Deactivated successfully. Feb 13 20:46:51.406529 kubelet[3441]: I0213 20:46:51.406335 3441 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-897cn" podStartSLOduration=45.998252278 podStartE2EDuration="52.406315521s" podCreationTimestamp="2025-02-13 20:45:59 +0000 UTC" firstStartedPulling="2025-02-13 20:46:24.538730109 +0000 UTC m=+48.195393053" lastFinishedPulling="2025-02-13 20:46:30.946793352 +0000 UTC m=+54.603456296" observedRunningTime="2025-02-13 20:46:31.753303782 +0000 UTC m=+55.409966726" watchObservedRunningTime="2025-02-13 20:46:51.406315521 +0000 UTC m=+75.062978465" Feb 13 20:46:52.117406 kubelet[3441]: I0213 20:46:52.116755 3441 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:46:58.605580 kubelet[3441]: I0213 20:46:58.604967 3441 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:47:02.999145 systemd[1]: run-containerd-runc-k8s.io-710e5a1681ca913f192873be73b694bb47f320ab0853910a257c1650f9a258ec-runc.4TpCaC.mount: Deactivated successfully. Feb 13 20:47:33.013979 systemd[1]: Started sshd@7-10.200.20.21:22-10.200.16.10:42798.service - OpenSSH per-connection server daemon (10.200.16.10:42798). Feb 13 20:47:33.506771 sshd[6222]: Accepted publickey for core from 10.200.16.10 port 42798 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:47:33.508896 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:33.513657 systemd-logind[1768]: New session 10 of user core. Feb 13 20:47:33.522430 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 20:47:33.952785 sshd[6222]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:33.956713 systemd[1]: sshd@7-10.200.20.21:22-10.200.16.10:42798.service: Deactivated successfully. Feb 13 20:47:33.960500 systemd-logind[1768]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:47:33.960950 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:47:33.962941 systemd-logind[1768]: Removed session 10. Feb 13 20:47:39.037264 systemd[1]: Started sshd@8-10.200.20.21:22-10.200.16.10:40056.service - OpenSSH per-connection server daemon (10.200.16.10:40056). Feb 13 20:47:39.479144 sshd[6239]: Accepted publickey for core from 10.200.16.10 port 40056 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:47:39.480572 sshd[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:39.484532 systemd-logind[1768]: New session 11 of user core. Feb 13 20:47:39.490415 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:47:39.872236 sshd[6239]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:39.875079 systemd[1]: sshd@8-10.200.20.21:22-10.200.16.10:40056.service: Deactivated successfully. Feb 13 20:47:39.878950 systemd-logind[1768]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:47:39.879800 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:47:39.882080 systemd-logind[1768]: Removed session 11. Feb 13 20:47:44.956442 systemd[1]: Started sshd@9-10.200.20.21:22-10.200.16.10:40066.service - OpenSSH per-connection server daemon (10.200.16.10:40066). Feb 13 20:47:45.438755 sshd[6279]: Accepted publickey for core from 10.200.16.10 port 40066 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:47:45.440363 sshd[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:45.444391 systemd-logind[1768]: New session 12 of user core. Feb 13 20:47:45.451135 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:47:45.859578 sshd[6279]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:45.862268 systemd[1]: sshd@9-10.200.20.21:22-10.200.16.10:40066.service: Deactivated successfully. Feb 13 20:47:45.867568 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:47:45.869337 systemd-logind[1768]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:47:45.870632 systemd-logind[1768]: Removed session 12. Feb 13 20:47:45.939300 systemd[1]: Started sshd@10-10.200.20.21:22-10.200.16.10:40072.service - OpenSSH per-connection server daemon (10.200.16.10:40072). Feb 13 20:47:46.391024 sshd[6296]: Accepted publickey for core from 10.200.16.10 port 40072 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:47:46.393062 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:46.397865 systemd-logind[1768]: New session 13 of user core. Feb 13 20:47:46.402253 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:47:46.838591 sshd[6296]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:46.843135 systemd[1]: sshd@10-10.200.20.21:22-10.200.16.10:40072.service: Deactivated successfully. Feb 13 20:47:46.848308 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:47:46.849177 systemd-logind[1768]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:47:46.850481 systemd-logind[1768]: Removed session 13. 
Feb 13 20:47:46.917278 systemd[1]: Started sshd@11-10.200.20.21:22-10.200.16.10:40086.service - OpenSSH per-connection server daemon (10.200.16.10:40086). Feb 13 20:47:47.362058 sshd[6308]: Accepted publickey for core from 10.200.16.10 port 40086 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:47:47.363523 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:47.367486 systemd-logind[1768]: New session 14 of user core. Feb 13 20:47:47.374327 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:47:47.747107 sshd[6308]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:47.750898 systemd[1]: sshd@11-10.200.20.21:22-10.200.16.10:40086.service: Deactivated successfully. Feb 13 20:47:47.754593 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:47:47.755889 systemd-logind[1768]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:47:47.756759 systemd-logind[1768]: Removed session 14. Feb 13 20:47:52.832271 systemd[1]: Started sshd@12-10.200.20.21:22-10.200.16.10:54328.service - OpenSSH per-connection server daemon (10.200.16.10:54328). Feb 13 20:47:53.309579 sshd[6350]: Accepted publickey for core from 10.200.16.10 port 54328 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:47:53.310939 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:53.315174 systemd-logind[1768]: New session 15 of user core. Feb 13 20:47:53.319304 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:47:53.730261 sshd[6350]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:53.734608 systemd-logind[1768]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:47:53.735276 systemd[1]: sshd@12-10.200.20.21:22-10.200.16.10:54328.service: Deactivated successfully. Feb 13 20:47:53.738463 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:47:53.740123 systemd-logind[1768]: Removed session 15. Feb 13 20:47:58.809284 systemd[1]: Started sshd@13-10.200.20.21:22-10.200.16.10:54334.service - OpenSSH per-connection server daemon (10.200.16.10:54334). Feb 13 20:47:59.252101 sshd[6368]: Accepted publickey for core from 10.200.16.10 port 54334 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:47:59.253455 sshd[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:59.258046 systemd-logind[1768]: New session 16 of user core. Feb 13 20:47:59.267374 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:47:59.642298 sshd[6368]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:59.644739 systemd[1]: sshd@13-10.200.20.21:22-10.200.16.10:54334.service: Deactivated successfully. Feb 13 20:47:59.649005 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:47:59.650882 systemd-logind[1768]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:47:59.652219 systemd-logind[1768]: Removed session 16. Feb 13 20:48:04.718302 systemd[1]: Started sshd@14-10.200.20.21:22-10.200.16.10:57530.service - OpenSSH per-connection server daemon (10.200.16.10:57530). 
Feb 13 20:48:05.158768 sshd[6415]: Accepted publickey for core from 10.200.16.10 port 57530 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:48:05.160264 sshd[6415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:05.164647 systemd-logind[1768]: New session 17 of user core. Feb 13 20:48:05.169317 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:48:05.558004 sshd[6415]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:05.562762 systemd[1]: sshd@14-10.200.20.21:22-10.200.16.10:57530.service: Deactivated successfully. Feb 13 20:48:05.566411 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:48:05.567830 systemd-logind[1768]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:48:05.568794 systemd-logind[1768]: Removed session 17. Feb 13 20:48:10.635319 systemd[1]: Started sshd@15-10.200.20.21:22-10.200.16.10:39284.service - OpenSSH per-connection server daemon (10.200.16.10:39284). Feb 13 20:48:11.077188 sshd[6448]: Accepted publickey for core from 10.200.16.10 port 39284 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:48:11.078541 sshd[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:11.083340 systemd-logind[1768]: New session 18 of user core. Feb 13 20:48:11.087303 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:48:11.470525 sshd[6448]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:11.473797 systemd[1]: sshd@15-10.200.20.21:22-10.200.16.10:39284.service: Deactivated successfully. Feb 13 20:48:11.477391 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:48:11.478791 systemd-logind[1768]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:48:11.479951 systemd-logind[1768]: Removed session 18. Feb 13 20:48:11.555431 systemd[1]: Started sshd@16-10.200.20.21:22-10.200.16.10:39288.service - OpenSSH per-connection server daemon (10.200.16.10:39288). Feb 13 20:48:12.032175 sshd[6461]: Accepted publickey for core from 10.200.16.10 port 39288 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:48:12.033986 sshd[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:12.038097 systemd-logind[1768]: New session 19 of user core. Feb 13 20:48:12.046338 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:48:12.547422 sshd[6461]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:12.550339 systemd[1]: sshd@16-10.200.20.21:22-10.200.16.10:39288.service: Deactivated successfully. Feb 13 20:48:12.554975 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:48:12.555225 systemd-logind[1768]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:48:12.557533 systemd-logind[1768]: Removed session 19. Feb 13 20:48:12.626483 systemd[1]: Started sshd@17-10.200.20.21:22-10.200.16.10:39290.service - OpenSSH per-connection server daemon (10.200.16.10:39290). Feb 13 20:48:13.086085 sshd[6473]: Accepted publickey for core from 10.200.16.10 port 39290 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:48:13.087382 sshd[6473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:13.096793 systemd-logind[1768]: New session 20 of user core. Feb 13 20:48:13.105361 systemd[1]: Started session-20.scope - Session 20 of User core. 
Feb 13 20:48:15.174324 sshd[6473]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:15.177793 systemd[1]: sshd@17-10.200.20.21:22-10.200.16.10:39290.service: Deactivated successfully. Feb 13 20:48:15.181715 systemd-logind[1768]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:48:15.182504 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:48:15.184190 systemd-logind[1768]: Removed session 20. Feb 13 20:48:15.256580 systemd[1]: Started sshd@18-10.200.20.21:22-10.200.16.10:39296.service - OpenSSH per-connection server daemon (10.200.16.10:39296). Feb 13 20:48:15.735539 sshd[6494]: Accepted publickey for core from 10.200.16.10 port 39296 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:48:15.737120 sshd[6494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:15.741182 systemd-logind[1768]: New session 21 of user core. Feb 13 20:48:15.750305 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:48:16.286250 sshd[6494]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:16.289689 systemd[1]: sshd@18-10.200.20.21:22-10.200.16.10:39296.service: Deactivated successfully. Feb 13 20:48:16.292629 systemd-logind[1768]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:48:16.292897 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:48:16.294769 systemd-logind[1768]: Removed session 21. Feb 13 20:48:16.362343 systemd[1]: Started sshd@19-10.200.20.21:22-10.200.16.10:39308.service - OpenSSH per-connection server daemon (10.200.16.10:39308). Feb 13 20:48:16.813227 sshd[6505]: Accepted publickey for core from 10.200.16.10 port 39308 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:48:16.815232 sshd[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:16.819187 systemd-logind[1768]: New session 22 of user core. Feb 13 20:48:16.826394 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:48:17.195281 sshd[6505]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:17.199922 systemd-logind[1768]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:48:17.200073 systemd[1]: sshd@19-10.200.20.21:22-10.200.16.10:39308.service: Deactivated successfully. Feb 13 20:48:17.203318 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:48:17.204514 systemd-logind[1768]: Removed session 22. Feb 13 20:48:22.274252 systemd[1]: Started sshd@20-10.200.20.21:22-10.200.16.10:57544.service - OpenSSH per-connection server daemon (10.200.16.10:57544). Feb 13 20:48:22.718349 sshd[6545]: Accepted publickey for core from 10.200.16.10 port 57544 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8 Feb 13 20:48:22.719754 sshd[6545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:22.723968 systemd-logind[1768]: New session 23 of user core. Feb 13 20:48:22.728373 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:48:23.105805 sshd[6545]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:23.110475 systemd[1]: sshd@20-10.200.20.21:22-10.200.16.10:57544.service: Deactivated successfully. Feb 13 20:48:23.111749 systemd-logind[1768]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:48:23.113726 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:48:23.115578 systemd-logind[1768]: Removed session 23. 
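Each SSH connection above follows the same lifecycle: sshd accepts the public key, pam_unix opens the session, systemd-logind allocates "New session N of user core", and on disconnect the scope is deactivated and the session removed. A small sketch for pairing those open/close records and reporting per-session durations; it assumes standard journalctl output with one record per line (the dump above happens to wrap several records per physical line), and the tool itself is illustrative, not part of the system being logged:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Match the systemd-logind open/close messages seen in the log, capturing
// the journal short timestamp and the session number.
var (
	openRe  = regexp.MustCompile(`^(\w{3} +\d+ [\d:.]+) .*New session (\d+) of user`)
	closeRe = regexp.MustCompile(`^(\w{3} +\d+ [\d:.]+) .*Removed session (\d+)\.`)
)

func main() {
	// Journal short timestamps carry no year; subtraction still works
	// because both endpoints parse into the same (zero) year.
	const stamp = "Jan 2 15:04:05.000000"
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := openRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := closeRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("session %s: %v\n", m[2], t.Sub(start))
					delete(opened, m[2])
				}
			}
		}
	}
}
```

Run against this journal, it would report, for example, session 10 lasting roughly 0.45 s (opened 20:47:33.513657, removed 20:47:33.962941), consistent with the short-lived per-connection sessions recorded above.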
Feb 13 20:48:28.195295 systemd[1]: Started sshd@21-10.200.20.21:22-10.200.16.10:57556.service - OpenSSH per-connection server daemon (10.200.16.10:57556).
Feb 13 20:48:28.671442 sshd[6561]: Accepted publickey for core from 10.200.16.10 port 57556 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:48:28.672812 sshd[6561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:28.676588 systemd-logind[1768]: New session 24 of user core.
Feb 13 20:48:28.682457 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 20:48:29.089088 sshd[6561]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:29.092745 systemd[1]: sshd@21-10.200.20.21:22-10.200.16.10:57556.service: Deactivated successfully.
Feb 13 20:48:29.096917 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 20:48:29.098462 systemd-logind[1768]: Session 24 logged out. Waiting for processes to exit.
Feb 13 20:48:29.100488 systemd-logind[1768]: Removed session 24.
Feb 13 20:48:34.168290 systemd[1]: Started sshd@22-10.200.20.21:22-10.200.16.10:56360.service - OpenSSH per-connection server daemon (10.200.16.10:56360).
Feb 13 20:48:34.608822 sshd[6574]: Accepted publickey for core from 10.200.16.10 port 56360 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:48:34.610237 sshd[6574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:34.614321 systemd-logind[1768]: New session 25 of user core.
Feb 13 20:48:34.622453 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 20:48:35.006773 sshd[6574]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:35.010203 systemd[1]: sshd@22-10.200.20.21:22-10.200.16.10:56360.service: Deactivated successfully.
Feb 13 20:48:35.010253 systemd-logind[1768]: Session 25 logged out. Waiting for processes to exit.
Feb 13 20:48:35.014045 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 20:48:35.015596 systemd-logind[1768]: Removed session 25.
Feb 13 20:48:40.098283 systemd[1]: Started sshd@23-10.200.20.21:22-10.200.16.10:33370.service - OpenSSH per-connection server daemon (10.200.16.10:33370).
Feb 13 20:48:40.537688 sshd[6610]: Accepted publickey for core from 10.200.16.10 port 33370 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:48:40.539157 sshd[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:40.544904 systemd-logind[1768]: New session 26 of user core.
Feb 13 20:48:40.549319 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 20:48:40.930623 sshd[6610]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:40.934538 systemd[1]: sshd@23-10.200.20.21:22-10.200.16.10:33370.service: Deactivated successfully.
Feb 13 20:48:40.938203 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 20:48:40.939237 systemd-logind[1768]: Session 26 logged out. Waiting for processes to exit.
Feb 13 20:48:40.940954 systemd-logind[1768]: Removed session 26.
Feb 13 20:48:46.008425 systemd[1]: Started sshd@24-10.200.20.21:22-10.200.16.10:33374.service - OpenSSH per-connection server daemon (10.200.16.10:33374).
Feb 13 20:48:46.447502 sshd[6625]: Accepted publickey for core from 10.200.16.10 port 33374 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:48:46.448860 sshd[6625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:46.453603 systemd-logind[1768]: New session 27 of user core.
Feb 13 20:48:46.458294 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 20:48:46.849248 sshd[6625]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:46.852838 systemd[1]: sshd@24-10.200.20.21:22-10.200.16.10:33374.service: Deactivated successfully.
Feb 13 20:48:46.856615 systemd-logind[1768]: Session 27 logged out. Waiting for processes to exit.
Feb 13 20:48:46.857081 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 20:48:46.858661 systemd-logind[1768]: Removed session 27.
Feb 13 20:48:51.927256 systemd[1]: Started sshd@25-10.200.20.21:22-10.200.16.10:59902.service - OpenSSH per-connection server daemon (10.200.16.10:59902).
Feb 13 20:48:52.371758 sshd[6662]: Accepted publickey for core from 10.200.16.10 port 59902 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:48:52.373065 sshd[6662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:52.378059 systemd-logind[1768]: New session 28 of user core.
Feb 13 20:48:52.384316 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 20:48:52.756669 sshd[6662]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:52.761047 systemd[1]: sshd@25-10.200.20.21:22-10.200.16.10:59902.service: Deactivated successfully.
Feb 13 20:48:52.765407 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 20:48:52.766595 systemd-logind[1768]: Session 28 logged out. Waiting for processes to exit.
Feb 13 20:48:52.767948 systemd-logind[1768]: Removed session 28.