Jan 15 12:48:44.344994 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 15 12:48:44.345017 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 15 12:48:44.345025 kernel: KASLR enabled
Jan 15 12:48:44.345031 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 15 12:48:44.345038 kernel: printk: bootconsole [pl11] enabled
Jan 15 12:48:44.345044 kernel: efi: EFI v2.7 by EDK II
Jan 15 12:48:44.345051 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 15 12:48:44.345057 kernel: random: crng init done
Jan 15 12:48:44.345063 kernel: ACPI: Early table checksum verification disabled
Jan 15 12:48:44.345069 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 15 12:48:44.345075 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:44.345081 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:44.345088 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 15 12:48:44.345095 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:44.345102 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:44.345108 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:44.345115 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:44.345123 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:44.345129 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:44.345136 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 15 12:48:44.345142 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:48:44.345148 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 15 12:48:44.345155 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 15 12:48:44.345161 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 15 12:48:44.345167 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 15 12:48:44.345174 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 15 12:48:44.345180 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 15 12:48:44.345187 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 15 12:48:44.345194 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 15 12:48:44.345201 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 15 12:48:44.345207 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 15 12:48:44.345213 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 15 12:48:44.345219 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 15 12:48:44.345226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 15 12:48:44.345232 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jan 15 12:48:44.345238 kernel: Zone ranges:
Jan 15 12:48:44.345244 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 15 12:48:44.345250 kernel: DMA32 empty
Jan 15 12:48:44.345257 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 12:48:44.345263 kernel: Movable zone start for each node
Jan 15 12:48:44.345274 kernel: Early memory node ranges
Jan 15 12:48:44.345281 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 15 12:48:44.345287 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 15 12:48:44.345294 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 15 12:48:44.345301 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 15 12:48:44.345309 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 15 12:48:44.345316 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 15 12:48:44.345322 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 12:48:44.345329 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 15 12:48:44.345336 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 15 12:48:44.345343 kernel: psci: probing for conduit method from ACPI.
Jan 15 12:48:44.345349 kernel: psci: PSCIv1.1 detected in firmware.
Jan 15 12:48:44.345356 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 15 12:48:44.347406 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 15 12:48:44.347419 kernel: psci: SMC Calling Convention v1.4
Jan 15 12:48:44.347426 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 15 12:48:44.347432 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 15 12:48:44.347444 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 15 12:48:44.347451 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 15 12:48:44.347458 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 15 12:48:44.347465 kernel: Detected PIPT I-cache on CPU0
Jan 15 12:48:44.347472 kernel: CPU features: detected: GIC system register CPU interface
Jan 15 12:48:44.347479 kernel: CPU features: detected: Hardware dirty bit management
Jan 15 12:48:44.347490 kernel: CPU features: detected: Spectre-BHB
Jan 15 12:48:44.347497 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 15 12:48:44.347504 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 15 12:48:44.347511 kernel: CPU features: detected: ARM erratum 1418040
Jan 15 12:48:44.347517 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 15 12:48:44.347526 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 15 12:48:44.347532 kernel: alternatives: applying boot alternatives
Jan 15 12:48:44.347541 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 15 12:48:44.347549 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 15 12:48:44.347556 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 15 12:48:44.347563 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 15 12:48:44.347570 kernel: Fallback order for Node 0: 0
Jan 15 12:48:44.347576 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 15 12:48:44.347583 kernel: Policy zone: Normal
Jan 15 12:48:44.347590 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 15 12:48:44.347597 kernel: software IO TLB: area num 2.
Jan 15 12:48:44.347605 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 15 12:48:44.347613 kernel: Memory: 3982752K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211408K reserved, 0K cma-reserved)
Jan 15 12:48:44.347620 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 15 12:48:44.347627 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 15 12:48:44.347634 kernel: rcu: RCU event tracing is enabled.
Jan 15 12:48:44.347641 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 15 12:48:44.347648 kernel: Trampoline variant of Tasks RCU enabled.
Jan 15 12:48:44.347655 kernel: Tracing variant of Tasks RCU enabled.
Jan 15 12:48:44.347661 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 15 12:48:44.347668 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 15 12:48:44.347675 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 15 12:48:44.347683 kernel: GICv3: 960 SPIs implemented
Jan 15 12:48:44.347690 kernel: GICv3: 0 Extended SPIs implemented
Jan 15 12:48:44.347696 kernel: Root IRQ handler: gic_handle_irq
Jan 15 12:48:44.347703 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 15 12:48:44.347710 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 15 12:48:44.347717 kernel: ITS: No ITS available, not enabling LPIs
Jan 15 12:48:44.347724 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 15 12:48:44.347730 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 15 12:48:44.347737 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 15 12:48:44.347744 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 15 12:48:44.347751 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 15 12:48:44.347760 kernel: Console: colour dummy device 80x25
Jan 15 12:48:44.347767 kernel: printk: console [tty1] enabled
Jan 15 12:48:44.347774 kernel: ACPI: Core revision 20230628
Jan 15 12:48:44.347781 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 15 12:48:44.347788 kernel: pid_max: default: 32768 minimum: 301
Jan 15 12:48:44.347795 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 15 12:48:44.347802 kernel: landlock: Up and running.
Jan 15 12:48:44.347809 kernel: SELinux: Initializing.
Jan 15 12:48:44.347816 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 12:48:44.347823 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 12:48:44.347832 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 12:48:44.347839 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 12:48:44.347846 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 15 12:48:44.347853 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 15 12:48:44.347860 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 15 12:48:44.347867 kernel: rcu: Hierarchical SRCU implementation.
Jan 15 12:48:44.347874 kernel: rcu: Max phase no-delay instances is 400.
Jan 15 12:48:44.347887 kernel: Remapping and enabling EFI services.
Jan 15 12:48:44.347895 kernel: smp: Bringing up secondary CPUs ...
Jan 15 12:48:44.347902 kernel: Detected PIPT I-cache on CPU1
Jan 15 12:48:44.347909 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 15 12:48:44.347918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 15 12:48:44.347925 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 15 12:48:44.347933 kernel: smp: Brought up 1 node, 2 CPUs
Jan 15 12:48:44.347940 kernel: SMP: Total of 2 processors activated.
Jan 15 12:48:44.347947 kernel: CPU features: detected: 32-bit EL0 Support
Jan 15 12:48:44.347956 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 15 12:48:44.347964 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 15 12:48:44.347971 kernel: CPU features: detected: CRC32 instructions
Jan 15 12:48:44.347978 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 15 12:48:44.347985 kernel: CPU features: detected: LSE atomic instructions
Jan 15 12:48:44.347993 kernel: CPU features: detected: Privileged Access Never
Jan 15 12:48:44.348000 kernel: CPU: All CPU(s) started at EL1
Jan 15 12:48:44.348007 kernel: alternatives: applying system-wide alternatives
Jan 15 12:48:44.348015 kernel: devtmpfs: initialized
Jan 15 12:48:44.348024 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 15 12:48:44.348032 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 15 12:48:44.348039 kernel: pinctrl core: initialized pinctrl subsystem
Jan 15 12:48:44.348060 kernel: SMBIOS 3.1.0 present.
Jan 15 12:48:44.348068 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 15 12:48:44.348075 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 15 12:48:44.348083 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 15 12:48:44.348090 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 15 12:48:44.348098 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 15 12:48:44.348107 kernel: audit: initializing netlink subsys (disabled)
Jan 15 12:48:44.348114 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 15 12:48:44.348122 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 15 12:48:44.348129 kernel: cpuidle: using governor menu
Jan 15 12:48:44.348136 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 15 12:48:44.348144 kernel: ASID allocator initialised with 32768 entries
Jan 15 12:48:44.348152 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 15 12:48:44.348159 kernel: Serial: AMBA PL011 UART driver
Jan 15 12:48:44.348166 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 15 12:48:44.348175 kernel: Modules: 0 pages in range for non-PLT usage
Jan 15 12:48:44.348182 kernel: Modules: 509040 pages in range for PLT usage
Jan 15 12:48:44.348190 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 15 12:48:44.348197 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 15 12:48:44.348205 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 15 12:48:44.348212 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 15 12:48:44.348220 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 15 12:48:44.348227 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 15 12:48:44.348235 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 15 12:48:44.348243 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 15 12:48:44.348251 kernel: ACPI: Added _OSI(Module Device)
Jan 15 12:48:44.348258 kernel: ACPI: Added _OSI(Processor Device)
Jan 15 12:48:44.348266 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 15 12:48:44.348273 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 15 12:48:44.348280 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 15 12:48:44.348288 kernel: ACPI: Interpreter enabled
Jan 15 12:48:44.348295 kernel: ACPI: Using GIC for interrupt routing
Jan 15 12:48:44.348302 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 15 12:48:44.348311 kernel: printk: console [ttyAMA0] enabled
Jan 15 12:48:44.348318 kernel: printk: bootconsole [pl11] disabled
Jan 15 12:48:44.348326 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 15 12:48:44.348333 kernel: iommu: Default domain type: Translated
Jan 15 12:48:44.348340 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 15 12:48:44.348348 kernel: efivars: Registered efivars operations
Jan 15 12:48:44.348355 kernel: vgaarb: loaded
Jan 15 12:48:44.348393 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 15 12:48:44.348400 kernel: VFS: Disk quotas dquot_6.6.0
Jan 15 12:48:44.348410 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 15 12:48:44.348418 kernel: pnp: PnP ACPI init
Jan 15 12:48:44.348425 kernel: pnp: PnP ACPI: found 0 devices
Jan 15 12:48:44.348432 kernel: NET: Registered PF_INET protocol family
Jan 15 12:48:44.348440 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 15 12:48:44.348447 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 15 12:48:44.348454 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 15 12:48:44.348462 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 15 12:48:44.348469 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 15 12:48:44.348481 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 15 12:48:44.348489 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 12:48:44.348496 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 12:48:44.348504 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 15 12:48:44.348511 kernel: PCI: CLS 0 bytes, default 64
Jan 15 12:48:44.348518 kernel: kvm [1]: HYP mode not available
Jan 15 12:48:44.348525 kernel: Initialise system trusted keyrings
Jan 15 12:48:44.348533 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 15 12:48:44.348540 kernel: Key type asymmetric registered
Jan 15 12:48:44.348549 kernel: Asymmetric key parser 'x509' registered
Jan 15 12:48:44.348556 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 15 12:48:44.348563 kernel: io scheduler mq-deadline registered
Jan 15 12:48:44.348571 kernel: io scheduler kyber registered
Jan 15 12:48:44.348578 kernel: io scheduler bfq registered
Jan 15 12:48:44.348585 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 15 12:48:44.348592 kernel: thunder_xcv, ver 1.0
Jan 15 12:48:44.348600 kernel: thunder_bgx, ver 1.0
Jan 15 12:48:44.348607 kernel: nicpf, ver 1.0
Jan 15 12:48:44.348614 kernel: nicvf, ver 1.0
Jan 15 12:48:44.348749 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 15 12:48:44.348820 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-15T12:48:43 UTC (1736945323)
Jan 15 12:48:44.348831 kernel: efifb: probing for efifb
Jan 15 12:48:44.348838 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 15 12:48:44.348846 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 15 12:48:44.348853 kernel: efifb: scrolling: redraw
Jan 15 12:48:44.348861 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 15 12:48:44.348871 kernel: Console: switching to colour frame buffer device 128x48
Jan 15 12:48:44.348878 kernel: fb0: EFI VGA frame buffer device
Jan 15 12:48:44.348886 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 15 12:48:44.348894 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 15 12:48:44.348901 kernel: No ACPI PMU IRQ for CPU0
Jan 15 12:48:44.348908 kernel: No ACPI PMU IRQ for CPU1
Jan 15 12:48:44.348916 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 15 12:48:44.348923 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 15 12:48:44.348931 kernel: watchdog: Hard watchdog permanently disabled
Jan 15 12:48:44.348939 kernel: NET: Registered PF_INET6 protocol family
Jan 15 12:48:44.348947 kernel: Segment Routing with IPv6
Jan 15 12:48:44.348954 kernel: In-situ OAM (IOAM) with IPv6
Jan 15 12:48:44.348962 kernel: NET: Registered PF_PACKET protocol family
Jan 15 12:48:44.348969 kernel: Key type dns_resolver registered
Jan 15 12:48:44.348977 kernel: registered taskstats version 1
Jan 15 12:48:44.348984 kernel: Loading compiled-in X.509 certificates
Jan 15 12:48:44.348991 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 15 12:48:44.348999 kernel: Key type .fscrypt registered
Jan 15 12:48:44.349007 kernel: Key type fscrypt-provisioning registered
Jan 15 12:48:44.349015 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 15 12:48:44.349022 kernel: ima: Allocated hash algorithm: sha1
Jan 15 12:48:44.349029 kernel: ima: No architecture policies found
Jan 15 12:48:44.349037 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 15 12:48:44.349044 kernel: clk: Disabling unused clocks
Jan 15 12:48:44.349051 kernel: Freeing unused kernel memory: 39360K
Jan 15 12:48:44.349059 kernel: Run /init as init process
Jan 15 12:48:44.349066 kernel: with arguments:
Jan 15 12:48:44.349075 kernel: /init
Jan 15 12:48:44.349082 kernel: with environment:
Jan 15 12:48:44.349089 kernel: HOME=/
Jan 15 12:48:44.349096 kernel: TERM=linux
Jan 15 12:48:44.349103 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 15 12:48:44.349113 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 15 12:48:44.349122 systemd[1]: Detected virtualization microsoft.
Jan 15 12:48:44.349130 systemd[1]: Detected architecture arm64.
Jan 15 12:48:44.349139 systemd[1]: Running in initrd.
Jan 15 12:48:44.349147 systemd[1]: No hostname configured, using default hostname.
Jan 15 12:48:44.349155 systemd[1]: Hostname set to .
Jan 15 12:48:44.349163 systemd[1]: Initializing machine ID from random generator.
Jan 15 12:48:44.349171 systemd[1]: Queued start job for default target initrd.target.
Jan 15 12:48:44.349179 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 12:48:44.349187 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 12:48:44.349196 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 15 12:48:44.349205 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 15 12:48:44.349213 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 15 12:48:44.349221 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 15 12:48:44.349231 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 15 12:48:44.349239 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 15 12:48:44.349247 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 12:48:44.349257 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 15 12:48:44.349265 systemd[1]: Reached target paths.target - Path Units.
Jan 15 12:48:44.349272 systemd[1]: Reached target slices.target - Slice Units.
Jan 15 12:48:44.349280 systemd[1]: Reached target swap.target - Swaps.
Jan 15 12:48:44.349288 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 12:48:44.349296 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 12:48:44.349304 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 12:48:44.349312 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 15 12:48:44.349320 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 15 12:48:44.349329 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 12:48:44.349337 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 15 12:48:44.349345 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 12:48:44.349353 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 12:48:44.354512 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 15 12:48:44.354548 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 15 12:48:44.354566 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 15 12:48:44.354581 systemd[1]: Starting systemd-fsck-usr.service...
Jan 15 12:48:44.354597 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 15 12:48:44.354624 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 15 12:48:44.354679 systemd-journald[217]: Collecting audit messages is disabled.
Jan 15 12:48:44.354701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:48:44.354710 systemd-journald[217]: Journal started
Jan 15 12:48:44.354732 systemd-journald[217]: Runtime Journal (/run/log/journal/d615df8a5b214f81a712e3a4a82b8a37) is 8.0M, max 78.5M, 70.5M free.
Jan 15 12:48:44.355209 systemd-modules-load[218]: Inserted module 'overlay'
Jan 15 12:48:44.369655 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 15 12:48:44.380647 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 15 12:48:44.405901 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 15 12:48:44.405928 kernel: Bridge firewalling registered
Jan 15 12:48:44.398015 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 15 12:48:44.398452 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 12:48:44.413281 systemd[1]: Finished systemd-fsck-usr.service.
Jan 15 12:48:44.422353 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 15 12:48:44.435577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:48:44.459998 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 12:48:44.469550 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 12:48:44.496760 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 15 12:48:44.521584 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 15 12:48:44.529175 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:48:44.543803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 12:48:44.550508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 12:48:44.568384 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 12:48:44.598723 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 15 12:48:44.607570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 12:48:44.631892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 15 12:48:44.648537 dracut-cmdline[251]: dracut-dracut-053
Jan 15 12:48:44.659812 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 15 12:48:44.655703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 12:48:44.672857 systemd-resolved[252]: Positive Trust Anchors:
Jan 15 12:48:44.672868 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 12:48:44.672899 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 12:48:44.675072 systemd-resolved[252]: Defaulting to hostname 'linux'.
Jan 15 12:48:44.694589 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 12:48:44.717590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 12:48:44.802382 kernel: SCSI subsystem initialized
Jan 15 12:48:44.811383 kernel: Loading iSCSI transport class v2.0-870.
Jan 15 12:48:44.821385 kernel: iscsi: registered transport (tcp)
Jan 15 12:48:44.841451 kernel: iscsi: registered transport (qla4xxx)
Jan 15 12:48:44.841513 kernel: QLogic iSCSI HBA Driver
Jan 15 12:48:44.883873 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 15 12:48:44.903552 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 15 12:48:44.936026 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 15 12:48:44.936068 kernel: device-mapper: uevent: version 1.0.3
Jan 15 12:48:44.936418 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 15 12:48:44.993387 kernel: raid6: neonx8 gen() 15741 MB/s
Jan 15 12:48:45.013375 kernel: raid6: neonx4 gen() 15626 MB/s
Jan 15 12:48:45.033378 kernel: raid6: neonx2 gen() 13246 MB/s
Jan 15 12:48:45.054379 kernel: raid6: neonx1 gen() 10486 MB/s
Jan 15 12:48:45.074379 kernel: raid6: int64x8 gen() 6956 MB/s
Jan 15 12:48:45.094373 kernel: raid6: int64x4 gen() 7341 MB/s
Jan 15 12:48:45.115377 kernel: raid6: int64x2 gen() 6127 MB/s
Jan 15 12:48:45.139322 kernel: raid6: int64x1 gen() 5058 MB/s
Jan 15 12:48:45.139350 kernel: raid6: using algorithm neonx8 gen() 15741 MB/s
Jan 15 12:48:45.164127 kernel: raid6: .... xor() 11912 MB/s, rmw enabled
Jan 15 12:48:45.164143 kernel: raid6: using neon recovery algorithm
Jan 15 12:48:45.176860 kernel: xor: measuring software checksum speed
Jan 15 12:48:45.176891 kernel: 8regs : 19702 MB/sec
Jan 15 12:48:45.180606 kernel: 32regs : 19622 MB/sec
Jan 15 12:48:45.184197 kernel: arm64_neon : 26927 MB/sec
Jan 15 12:48:45.188824 kernel: xor: using function: arm64_neon (26927 MB/sec)
Jan 15 12:48:45.240425 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 15 12:48:45.252424 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 12:48:45.270518 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 12:48:45.295355 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Jan 15 12:48:45.301028 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 12:48:45.320541 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 15 12:48:45.350384 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Jan 15 12:48:45.380469 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 12:48:45.403700 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 12:48:45.447397 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 12:48:45.472103 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 15 12:48:45.500896 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 15 12:48:45.513671 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 12:48:45.530358 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 12:48:45.542467 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 15 12:48:45.568415 kernel: hv_vmbus: Vmbus version:5.3
Jan 15 12:48:45.568626 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 15 12:48:45.613129 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 15 12:48:45.613160 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 15 12:48:45.613171 kernel: PTP clock support registered
Jan 15 12:48:45.613180 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 15 12:48:45.594950 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 15 12:48:45.642176 kernel: hv_vmbus: registering driver hid_hyperv
Jan 15 12:48:45.642200 kernel: hv_utils: Registering HyperV Utility Driver
Jan 15 12:48:45.642218 kernel: hv_vmbus: registering driver hv_utils
Jan 15 12:48:45.595103 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:48:45.477409 kernel: hv_utils: Heartbeat IC version 3.0
Jan 15 12:48:45.489591 kernel: hv_utils: Shutdown IC version 3.2
Jan 15 12:48:45.489610 kernel: hv_utils: TimeSync IC version 4.0
Jan 15 12:48:45.489618 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 15 12:48:45.489629 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 15 12:48:45.489637 kernel: hv_vmbus: registering driver hv_netvsc
Jan 15 12:48:45.489646 systemd-journald[217]: Time jumped backwards, rotating.
Jan 15 12:48:45.489683 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 15 12:48:45.637639 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 12:48:45.524487 kernel: hv_vmbus: registering driver hv_storvsc
Jan 15 12:48:45.661029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:48:45.545091 kernel: scsi host1: storvsc_host_t
Jan 15 12:48:45.661255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:48:45.570691 kernel: scsi host0: storvsc_host_t
Jan 15 12:48:45.570862 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 15 12:48:45.570893 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 15 12:48:45.462240 systemd-resolved[252]: Clock change detected. Flushing caches.
Jan 15 12:48:45.498465 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:48:45.518387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:48:45.537489 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 12:48:45.557186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:48:45.635464 kernel: hv_netvsc 000d3ac5-af3c-000d-3ac5-af3c000d3ac5 eth0: VF slot 1 added
Jan 15 12:48:45.584591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:48:45.655199 kernel: hv_vmbus: registering driver hv_pci
Jan 15 12:48:45.584657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:48:45.596935 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:48:45.678677 kernel: hv_pci 30b0463b-1ae6-448a-9ff8-e358d0752b6d: PCI VMBus probing: Using version 0x10004
Jan 15 12:48:45.785263 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 15 12:48:45.785397 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 15 12:48:45.785407 kernel: hv_pci 30b0463b-1ae6-448a-9ff8-e358d0752b6d: PCI host bridge to bus 1ae6:00
Jan 15 12:48:45.785500 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 15 12:48:45.785587 kernel: pci_bus 1ae6:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 15 12:48:45.785687 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 15 12:48:45.785777 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 15 12:48:45.785859 kernel: pci_bus 1ae6:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 15 12:48:45.785933 kernel: pci 1ae6:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 15 12:48:45.786073 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 15 12:48:45.786160 kernel: pci 1ae6:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 15 12:48:45.786244 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 15 12:48:45.786325 kernel: pci 1ae6:00:02.0: enabling Extended Tags
Jan 15 12:48:45.786408 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 15 12:48:45.786489 kernel: pci 1ae6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1ae6:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 15 12:48:45.786570 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:48:45.786580 kernel: pci_bus 1ae6:00: busn_res: [bus 00-ff] end is updated to 00
Jan 15 12:48:45.786654 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 15 12:48:45.786737 kernel: pci 1ae6:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 15 12:48:45.648220 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:48:45.687426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:48:45.717733 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 12:48:45.835854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:48:45.860452 kernel: mlx5_core 1ae6:00:02.0: enabling device (0000 -> 0002)
Jan 15 12:48:46.104656 kernel: mlx5_core 1ae6:00:02.0: firmware version: 16.30.1284
Jan 15 12:48:46.104878 kernel: hv_netvsc 000d3ac5-af3c-000d-3ac5-af3c000d3ac5 eth0: VF registering: eth1
Jan 15 12:48:46.104973 kernel: mlx5_core 1ae6:00:02.0 eth1: joined to eth0
Jan 15 12:48:46.106217 kernel: mlx5_core 1ae6:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 15 12:48:46.115021 kernel: mlx5_core 1ae6:00:02.0 enP6886s1: renamed from eth1
Jan 15 12:48:46.339608 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 15 12:48:46.367318 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (488)
Jan 15 12:48:46.367358 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (490)
Jan 15 12:48:46.397288 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 15 12:48:46.409554 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 15 12:48:46.420897 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 15 12:48:46.450298 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 15 12:48:46.474045 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:48:46.480021 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:48:46.489026 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:48:46.562438 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 15 12:48:46.614317 (udev-worker)[483]: sda9: Failed to create/update device symlink '/dev/disk/by-partlabel/ROOT', ignoring: No such file or directory
Jan 15 12:48:47.489821 disk-uuid[606]: The operation has completed successfully.
Jan 15 12:48:47.494932 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:48:47.548495 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 15 12:48:47.548609 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 15 12:48:47.582217 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 15 12:48:47.596112 sh[719]: Success
Jan 15 12:48:47.637045 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 15 12:48:47.825464 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 15 12:48:47.835119 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 15 12:48:47.844377 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 15 12:48:47.874056 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234
Jan 15 12:48:47.874095 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:48:47.881071 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 15 12:48:47.886292 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 15 12:48:47.890686 kernel: BTRFS info (device dm-0): using free space tree
Jan 15 12:48:48.184305 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 15 12:48:48.189715 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 15 12:48:48.210308 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 15 12:48:48.218207 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 15 12:48:48.258123 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:48.258182 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:48:48.258192 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:48:48.279056 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:48:48.293912 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 15 12:48:48.299209 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:48.305808 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 15 12:48:48.320448 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 15 12:48:48.345576 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 15 12:48:48.369243 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 15 12:48:48.394505 systemd-networkd[903]: lo: Link UP
Jan 15 12:48:48.394517 systemd-networkd[903]: lo: Gained carrier
Jan 15 12:48:48.396493 systemd-networkd[903]: Enumeration completed
Jan 15 12:48:48.398865 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:48:48.398869 systemd-networkd[903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 15 12:48:48.399173 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 15 12:48:48.409074 systemd[1]: Reached target network.target - Network.
Jan 15 12:48:48.497017 kernel: mlx5_core 1ae6:00:02.0 enP6886s1: Link up
Jan 15 12:48:48.545018 kernel: hv_netvsc 000d3ac5-af3c-000d-3ac5-af3c000d3ac5 eth0: Data path switched to VF: enP6886s1
Jan 15 12:48:48.545934 systemd-networkd[903]: enP6886s1: Link UP
Jan 15 12:48:48.546196 systemd-networkd[903]: eth0: Link UP
Jan 15 12:48:48.546549 systemd-networkd[903]: eth0: Gained carrier
Jan 15 12:48:48.546558 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:48:48.573282 systemd-networkd[903]: enP6886s1: Gained carrier
Jan 15 12:48:48.585034 systemd-networkd[903]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 15 12:48:49.260128 ignition[889]: Ignition 2.19.0
Jan 15 12:48:49.260140 ignition[889]: Stage: fetch-offline
Jan 15 12:48:49.264927 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 12:48:49.260177 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:49.281229 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 15 12:48:49.260185 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:49.260276 ignition[889]: parsed url from cmdline: ""
Jan 15 12:48:49.260279 ignition[889]: no config URL provided
Jan 15 12:48:49.260283 ignition[889]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 12:48:49.260290 ignition[889]: no config at "/usr/lib/ignition/user.ign"
Jan 15 12:48:49.260294 ignition[889]: failed to fetch config: resource requires networking
Jan 15 12:48:49.260479 ignition[889]: Ignition finished successfully
Jan 15 12:48:49.295291 ignition[911]: Ignition 2.19.0
Jan 15 12:48:49.295838 ignition[911]: Stage: fetch
Jan 15 12:48:49.296072 ignition[911]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:49.296083 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:49.296214 ignition[911]: parsed url from cmdline: ""
Jan 15 12:48:49.296218 ignition[911]: no config URL provided
Jan 15 12:48:49.296222 ignition[911]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 12:48:49.296230 ignition[911]: no config at "/usr/lib/ignition/user.ign"
Jan 15 12:48:49.296262 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 15 12:48:49.386687 ignition[911]: GET result: OK
Jan 15 12:48:49.386807 ignition[911]: config has been read from IMDS userdata
Jan 15 12:48:49.386883 ignition[911]: parsing config with SHA512: ce01eba26a1f18f21a02b458829b6c91dbfdfa474cb4de2eed53561ce8b6f35744866081a6d18451bfa249de957ab5fbbc26ac5f5068e39ddf0ed98d57e73dd9
Jan 15 12:48:49.391132 unknown[911]: fetched base config from "system"
Jan 15 12:48:49.391557 ignition[911]: fetch: fetch complete
Jan 15 12:48:49.391138 unknown[911]: fetched base config from "system"
Jan 15 12:48:49.391562 ignition[911]: fetch: fetch passed
Jan 15 12:48:49.391143 unknown[911]: fetched user config from "azure"
Jan 15 12:48:49.391602 ignition[911]: Ignition finished successfully
Jan 15 12:48:49.396212 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 15 12:48:49.423151 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 15 12:48:49.445927 ignition[917]: Ignition 2.19.0
Jan 15 12:48:49.445939 ignition[917]: Stage: kargs
Jan 15 12:48:49.450933 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 15 12:48:49.447578 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:49.447597 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:49.448618 ignition[917]: kargs: kargs passed
Jan 15 12:48:49.472338 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 15 12:48:49.448670 ignition[917]: Ignition finished successfully
Jan 15 12:48:49.493062 ignition[924]: Ignition 2.19.0
Jan 15 12:48:49.498696 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 15 12:48:49.493068 ignition[924]: Stage: disks
Jan 15 12:48:49.508411 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 15 12:48:49.493277 ignition[924]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:49.521137 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 15 12:48:49.493286 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:49.531141 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 12:48:49.494218 ignition[924]: disks: disks passed
Jan 15 12:48:49.542688 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 12:48:49.494265 ignition[924]: Ignition finished successfully
Jan 15 12:48:49.553045 systemd[1]: Reached target basic.target - Basic System.
Jan 15 12:48:49.577271 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 15 12:48:49.649460 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 15 12:48:49.658420 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 15 12:48:49.675149 systemd-networkd[903]: enP6886s1: Gained IPv6LL
Jan 15 12:48:49.679208 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 15 12:48:49.737023 kernel: EXT4-fs (sda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 15 12:48:49.737261 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 15 12:48:49.742324 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 15 12:48:49.788112 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 12:48:49.796146 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 15 12:48:49.806239 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 15 12:48:49.841578 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943)
Jan 15 12:48:49.841603 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:49.834273 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 15 12:48:49.872493 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:48:49.872518 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:48:49.834314 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 12:48:49.855918 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 15 12:48:49.896309 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:48:49.897191 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 15 12:48:49.903979 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 12:48:49.931136 systemd-networkd[903]: eth0: Gained IPv6LL
Jan 15 12:48:50.361461 coreos-metadata[945]: Jan 15 12:48:50.361 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 15 12:48:50.372438 coreos-metadata[945]: Jan 15 12:48:50.372 INFO Fetch successful
Jan 15 12:48:50.377575 coreos-metadata[945]: Jan 15 12:48:50.372 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 15 12:48:50.390450 coreos-metadata[945]: Jan 15 12:48:50.390 INFO Fetch successful
Jan 15 12:48:50.406608 coreos-metadata[945]: Jan 15 12:48:50.406 INFO wrote hostname ci-4081.3.0-a-b8bd16053a to /sysroot/etc/hostname
Jan 15 12:48:50.416042 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 15 12:48:50.732026 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory
Jan 15 12:48:50.775764 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Jan 15 12:48:50.810723 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Jan 15 12:48:50.817147 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 15 12:48:51.667523 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 15 12:48:51.683241 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 15 12:48:51.703060 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:51.703194 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 15 12:48:51.711847 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 15 12:48:51.736064 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 15 12:48:51.751144 ignition[1061]: INFO : Ignition 2.19.0
Jan 15 12:48:51.751144 ignition[1061]: INFO : Stage: mount
Jan 15 12:48:51.761013 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:51.761013 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:51.761013 ignition[1061]: INFO : mount: mount passed
Jan 15 12:48:51.761013 ignition[1061]: INFO : Ignition finished successfully
Jan 15 12:48:51.759340 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 15 12:48:51.781237 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 15 12:48:51.801207 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 12:48:51.835521 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1073)
Jan 15 12:48:51.835543 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:51.835553 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:48:51.846076 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:48:51.853009 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:48:51.854496 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 12:48:51.888075 ignition[1090]: INFO : Ignition 2.19.0
Jan 15 12:48:51.888075 ignition[1090]: INFO : Stage: files
Jan 15 12:48:51.898108 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:51.898108 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:51.898108 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping
Jan 15 12:48:51.918901 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 15 12:48:51.918901 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 15 12:48:51.966151 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 15 12:48:51.974149 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 15 12:48:51.974149 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 15 12:48:51.966549 unknown[1090]: wrote ssh authorized keys file for user: core
Jan 15 12:48:51.995691 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 15 12:48:51.995691 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 15 12:48:52.044409 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 15 12:48:52.175960 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 15 12:48:52.642219 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 15 12:48:52.855840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:48:52.855840 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 15 12:48:52.891267 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: files passed
Jan 15 12:48:52.902063 ignition[1090]: INFO : Ignition finished successfully
Jan 15 12:48:52.920298 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 15 12:48:52.962677 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 15 12:48:52.979659 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 15 12:48:52.998921 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 15 12:48:52.999043 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 15 12:48:53.035824 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 12:48:53.035824 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 12:48:53.054688 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 12:48:53.068046 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 12:48:53.085849 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 15 12:48:53.105521 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 15 12:48:53.141911 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 15 12:48:53.144081 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 15 12:48:53.155116 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 15 12:48:53.167613 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 15 12:48:53.179043 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 15 12:48:53.195288 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 15 12:48:53.221329 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 15 12:48:53.241224 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 15 12:48:53.263735 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 15 12:48:53.282975 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 12:48:53.297572 systemd[1]: Stopped target timers.target - Timer Units.
Jan 15 12:48:53.311043 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 15 12:48:53.311231 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 15 12:48:53.332831 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 15 12:48:53.345618 systemd[1]: Stopped target basic.target - Basic System.
Jan 15 12:48:53.356883 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 15 12:48:53.364752 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 12:48:53.377921 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 15 12:48:53.391352 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 15 12:48:53.404276 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 12:48:53.423899 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 15 12:48:53.436203 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 15 12:48:53.448435 systemd[1]: Stopped target swap.target - Swaps.
Jan 15 12:48:53.457866 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 15 12:48:53.458041 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 12:48:53.469792 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 15 12:48:53.482975 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 12:48:53.496387 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 15 12:48:53.496675 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 12:48:53.512117 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 15 12:48:53.512268 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 15 12:48:53.525374 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 15 12:48:53.525506 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 12:48:53.547169 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 15 12:48:53.547287 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 15 12:48:53.559472 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 15 12:48:53.559598 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 15 12:48:53.608252 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 15 12:48:53.621747 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 15 12:48:53.637083 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 15 12:48:53.637262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 12:48:53.651809 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 15 12:48:53.674866 ignition[1143]: INFO : Ignition 2.19.0
Jan 15 12:48:53.674866 ignition[1143]: INFO : Stage: umount
Jan 15 12:48:53.651924 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 12:48:53.696269 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:53.696269 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:53.696269 ignition[1143]: INFO : umount: umount passed
Jan 15 12:48:53.696269 ignition[1143]: INFO : Ignition finished successfully
Jan 15 12:48:53.702139 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 15 12:48:53.702252 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 15 12:48:53.716594 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 15 12:48:53.717159 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 15 12:48:53.717245 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 15 12:48:53.731375 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 15 12:48:53.731441 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 15 12:48:53.748242 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 15 12:48:53.748312 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 15 12:48:53.759441 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 15 12:48:53.759498 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 15 12:48:53.770260 systemd[1]: Stopped target network.target - Network.
Jan 15 12:48:53.781525 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 15 12:48:53.781601 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 12:48:53.796731 systemd[1]: Stopped target paths.target - Path Units.
Jan 15 12:48:53.808395 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 15 12:48:53.820051 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 12:48:53.832553 systemd[1]: Stopped target slices.target - Slice Units.
Jan 15 12:48:53.844257 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 15 12:48:53.855586 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 15 12:48:53.855631 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 12:48:53.866695 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 15 12:48:53.866755 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 12:48:53.878646 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 15 12:48:53.878745 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 15 12:48:53.884722 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 15 12:48:53.884780 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 15 12:48:53.897110 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 15 12:48:53.913772 systemd-networkd[903]: eth0: DHCPv6 lease lost
Jan 15 12:48:53.913891 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 15 12:48:53.928723 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 15 12:48:53.928823 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 15 12:48:53.934723 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 15 12:48:53.934823 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 15 12:48:53.951372 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 15 12:48:53.951614 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 15 12:48:53.970709 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 15 12:48:53.970787 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 12:48:53.981807 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 15 12:48:53.981885 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 15 12:48:54.011248 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 15 12:48:54.177865 kernel: hv_netvsc 000d3ac5-af3c-000d-3ac5-af3c000d3ac5 eth0: Data path switched from VF: enP6886s1
Jan 15 12:48:54.022396 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 15 12:48:54.022479 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 15 12:48:54.035728 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 15 12:48:54.035788 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 15 12:48:54.046375 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 15 12:48:54.046435 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 15 12:48:54.061155 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 15 12:48:54.061217 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 12:48:54.075207 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 12:48:54.130963 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 15 12:48:54.131189 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 12:48:54.151411 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 15 12:48:54.151458 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 15 12:48:54.171808 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 15 12:48:54.171858 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 12:48:54.178377 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 15 12:48:54.178428 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 12:48:54.197605 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 15 12:48:54.197673 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 15 12:48:54.209709 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 15 12:48:54.209768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:48:54.242739 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 15 12:48:54.263121 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 15 12:48:54.263209 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 12:48:54.278945 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 15 12:48:54.279035 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 12:48:54.292320 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 15 12:48:54.292385 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 12:48:54.307151 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:48:54.307214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:48:54.321209 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 15 12:48:54.321297 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 15 12:48:54.333198 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 15 12:48:54.339895 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 15 12:48:54.353256 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 15 12:48:54.507213 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jan 15 12:48:54.371325 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 15 12:48:54.410551 systemd[1]: Switching root.
Jan 15 12:48:54.516598 systemd-journald[217]: Journal stopped
00000000 MSFT 00000000) Jan 15 12:48:44.345142 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 15 12:48:44.345148 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 15 12:48:44.345155 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 15 12:48:44.345161 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 15 12:48:44.345167 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 15 12:48:44.345174 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 15 12:48:44.345180 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 15 12:48:44.345187 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 15 12:48:44.345194 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 15 12:48:44.345201 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 15 12:48:44.345207 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 15 12:48:44.345213 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 15 12:48:44.345219 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 15 12:48:44.345226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 15 12:48:44.345232 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jan 15 12:48:44.345238 kernel: Zone ranges: Jan 15 12:48:44.345244 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 15 12:48:44.345250 kernel: DMA32 empty Jan 15 12:48:44.345257 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 15 12:48:44.345263 kernel: Movable zone start for each node Jan 15 12:48:44.345274 kernel: Early memory node ranges Jan 15 12:48:44.345281 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 15 12:48:44.345287 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 15 12:48:44.345294 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 15 12:48:44.345301 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 15 12:48:44.345309 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 15 12:48:44.345316 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 15 12:48:44.345322 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 15 12:48:44.345329 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 15 12:48:44.345336 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 15 12:48:44.345343 kernel: psci: probing for conduit method from ACPI. Jan 15 12:48:44.345349 kernel: psci: PSCIv1.1 detected in firmware. Jan 15 12:48:44.345356 kernel: psci: Using standard PSCI v0.2 function IDs Jan 15 12:48:44.347406 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 15 12:48:44.347419 kernel: psci: SMC Calling Convention v1.4 Jan 15 12:48:44.347426 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 15 12:48:44.347432 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 15 12:48:44.347444 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 15 12:48:44.347451 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 15 12:48:44.347458 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 15 12:48:44.347465 kernel: Detected PIPT I-cache on CPU0 Jan 15 12:48:44.347472 kernel: CPU features: detected: GIC system register CPU interface Jan 15 12:48:44.347479 kernel: CPU features: detected: Hardware dirty bit management Jan 15 12:48:44.347490 kernel: CPU features: detected: Spectre-BHB Jan 15 12:48:44.347497 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 15 12:48:44.347504 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 15 12:48:44.347511 kernel: CPU features: detected: ARM erratum 1418040 Jan 15 12:48:44.347517 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 15 12:48:44.347526 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 15 12:48:44.347532 kernel: alternatives: applying boot alternatives Jan 15 12:48:44.347541 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 15 12:48:44.347549 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 15 12:48:44.347556 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 15 12:48:44.347563 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 15 12:48:44.347570 kernel: Fallback order for Node 0: 0 Jan 15 12:48:44.347576 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 15 12:48:44.347583 kernel: Policy zone: Normal Jan 15 12:48:44.347590 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 12:48:44.347597 kernel: software IO TLB: area num 2. Jan 15 12:48:44.347605 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 15 12:48:44.347613 kernel: Memory: 3982752K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211408K reserved, 0K cma-reserved) Jan 15 12:48:44.347620 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 15 12:48:44.347627 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 12:48:44.347634 kernel: rcu: RCU event tracing is enabled. Jan 15 12:48:44.347641 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 15 12:48:44.347648 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 12:48:44.347655 kernel: Tracing variant of Tasks RCU enabled. Jan 15 12:48:44.347661 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 15 12:48:44.347668 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 15 12:48:44.347675 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 15 12:48:44.347683 kernel: GICv3: 960 SPIs implemented Jan 15 12:48:44.347690 kernel: GICv3: 0 Extended SPIs implemented Jan 15 12:48:44.347696 kernel: Root IRQ handler: gic_handle_irq Jan 15 12:48:44.347703 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 15 12:48:44.347710 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 15 12:48:44.347717 kernel: ITS: No ITS available, not enabling LPIs Jan 15 12:48:44.347724 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 15 12:48:44.347730 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 15 12:48:44.347737 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 15 12:48:44.347744 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 15 12:48:44.347751 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 15 12:48:44.347760 kernel: Console: colour dummy device 80x25 Jan 15 12:48:44.347767 kernel: printk: console [tty1] enabled Jan 15 12:48:44.347774 kernel: ACPI: Core revision 20230628 Jan 15 12:48:44.347781 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 15 12:48:44.347788 kernel: pid_max: default: 32768 minimum: 301 Jan 15 12:48:44.347795 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 15 12:48:44.347802 kernel: landlock: Up and running. Jan 15 12:48:44.347809 kernel: SELinux: Initializing. Jan 15 12:48:44.347816 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 12:48:44.347823 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 12:48:44.347832 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 15 12:48:44.347839 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 15 12:48:44.347846 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jan 15 12:48:44.347853 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Jan 15 12:48:44.347860 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 15 12:48:44.347867 kernel: rcu: Hierarchical SRCU implementation. Jan 15 12:48:44.347874 kernel: rcu: Max phase no-delay instances is 400. Jan 15 12:48:44.347887 kernel: Remapping and enabling EFI services. Jan 15 12:48:44.347895 kernel: smp: Bringing up secondary CPUs ... Jan 15 12:48:44.347902 kernel: Detected PIPT I-cache on CPU1 Jan 15 12:48:44.347909 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 15 12:48:44.347918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 15 12:48:44.347925 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 15 12:48:44.347933 kernel: smp: Brought up 1 node, 2 CPUs Jan 15 12:48:44.347940 kernel: SMP: Total of 2 processors activated. 
Jan 15 12:48:44.347947 kernel: CPU features: detected: 32-bit EL0 Support Jan 15 12:48:44.347956 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 15 12:48:44.347964 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 15 12:48:44.347971 kernel: CPU features: detected: CRC32 instructions Jan 15 12:48:44.347978 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 15 12:48:44.347985 kernel: CPU features: detected: LSE atomic instructions Jan 15 12:48:44.347993 kernel: CPU features: detected: Privileged Access Never Jan 15 12:48:44.348000 kernel: CPU: All CPU(s) started at EL1 Jan 15 12:48:44.348007 kernel: alternatives: applying system-wide alternatives Jan 15 12:48:44.348015 kernel: devtmpfs: initialized Jan 15 12:48:44.348024 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 12:48:44.348032 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 15 12:48:44.348039 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 12:48:44.348060 kernel: SMBIOS 3.1.0 present. Jan 15 12:48:44.348068 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 15 12:48:44.348075 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 12:48:44.348083 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 15 12:48:44.348090 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 15 12:48:44.348098 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 15 12:48:44.348107 kernel: audit: initializing netlink subsys (disabled) Jan 15 12:48:44.348114 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 15 12:48:44.348122 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 12:48:44.348129 kernel: cpuidle: using governor menu Jan 15 12:48:44.348136 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 15 12:48:44.348144 kernel: ASID allocator initialised with 32768 entries Jan 15 12:48:44.348152 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 12:48:44.348159 kernel: Serial: AMBA PL011 UART driver Jan 15 12:48:44.348166 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 15 12:48:44.348175 kernel: Modules: 0 pages in range for non-PLT usage Jan 15 12:48:44.348182 kernel: Modules: 509040 pages in range for PLT usage Jan 15 12:48:44.348190 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 15 12:48:44.348197 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 15 12:48:44.348205 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 15 12:48:44.348212 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 15 12:48:44.348220 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 12:48:44.348227 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 12:48:44.348235 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 15 12:48:44.348243 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 15 12:48:44.348251 kernel: ACPI: Added _OSI(Module Device) Jan 15 12:48:44.348258 kernel: ACPI: Added _OSI(Processor Device) Jan 15 12:48:44.348266 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 15 12:48:44.348273 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 12:48:44.348280 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 12:48:44.348288 kernel: ACPI: Interpreter enabled Jan 15 12:48:44.348295 kernel: ACPI: Using GIC for interrupt routing Jan 15 12:48:44.348302 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 15 12:48:44.348311 kernel: printk: console [ttyAMA0] enabled Jan 15 12:48:44.348318 kernel: printk: bootconsole [pl11] disabled Jan 15 12:48:44.348326 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 15 12:48:44.348333 kernel: iommu: Default domain type: Translated Jan 15 12:48:44.348340 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 15 12:48:44.348348 kernel: efivars: Registered efivars operations Jan 15 12:48:44.348355 kernel: vgaarb: loaded Jan 15 12:48:44.348393 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 15 12:48:44.348400 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 12:48:44.348410 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 12:48:44.348418 kernel: pnp: PnP ACPI init Jan 15 12:48:44.348425 kernel: pnp: PnP ACPI: found 0 devices Jan 15 12:48:44.348432 kernel: NET: Registered PF_INET protocol family Jan 15 12:48:44.348440 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 15 12:48:44.348447 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 15 12:48:44.348454 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 12:48:44.348462 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 12:48:44.348469 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 15 12:48:44.348481 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 15 12:48:44.348489 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 12:48:44.348496 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 12:48:44.348504 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 12:48:44.348511 kernel: PCI: CLS 0 bytes, default 64 Jan 15 12:48:44.348518 kernel: kvm [1]: HYP mode not available Jan 15 12:48:44.348525 kernel: Initialise system trusted keyrings Jan 15 12:48:44.348533 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 15 12:48:44.348540 kernel: Key type asymmetric registered Jan 15 12:48:44.348549 kernel: Asymmetric key parser 'x509' registered Jan 15 12:48:44.348556 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 15 12:48:44.348563 kernel: io scheduler mq-deadline registered Jan 15 12:48:44.348571 kernel: io scheduler kyber registered Jan 15 12:48:44.348578 kernel: io scheduler bfq registered Jan 15 12:48:44.348585 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 12:48:44.348592 kernel: thunder_xcv, ver 1.0 Jan 15 12:48:44.348600 kernel: thunder_bgx, ver 1.0 Jan 15 12:48:44.348607 kernel: nicpf, ver 1.0 Jan 15 12:48:44.348614 kernel: nicvf, ver 1.0 Jan 15 12:48:44.348749 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 15 12:48:44.348820 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-15T12:48:43 UTC (1736945323) Jan 15 12:48:44.348831 kernel: efifb: probing for efifb Jan 15 12:48:44.348838 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 15 12:48:44.348846 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 15 12:48:44.348853 kernel: efifb: scrolling: redraw Jan 15 12:48:44.348861 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 15 12:48:44.348871 kernel: Console: switching to colour frame buffer device 128x48 Jan 15 12:48:44.348878 kernel: fb0: EFI VGA frame buffer device Jan 15 12:48:44.348886 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 15 12:48:44.348894 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 15 12:48:44.348901 kernel: No ACPI PMU IRQ for CPU0 Jan 15 12:48:44.348908 kernel: No ACPI PMU IRQ for CPU1 Jan 15 12:48:44.348916 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jan 15 12:48:44.348923 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 15 12:48:44.348931 kernel: watchdog: Hard watchdog permanently disabled Jan 15 12:48:44.348939 kernel: NET: Registered PF_INET6 protocol family Jan 15 12:48:44.348947 kernel: Segment Routing with IPv6 Jan 15 12:48:44.348954 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 12:48:44.348962 kernel: NET: Registered PF_PACKET protocol family Jan 15 12:48:44.348969 kernel: Key type dns_resolver registered Jan 15 12:48:44.348977 kernel: registered taskstats version 1 Jan 15 12:48:44.348984 kernel: Loading compiled-in X.509 certificates Jan 15 12:48:44.348991 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638' Jan 15 12:48:44.348999 kernel: Key type .fscrypt registered Jan 15 12:48:44.349007 kernel: Key type fscrypt-provisioning registered Jan 15 12:48:44.349015 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 15 12:48:44.349022 kernel: ima: Allocated hash algorithm: sha1 Jan 15 12:48:44.349029 kernel: ima: No architecture policies found Jan 15 12:48:44.349037 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 15 12:48:44.349044 kernel: clk: Disabling unused clocks Jan 15 12:48:44.349051 kernel: Freeing unused kernel memory: 39360K Jan 15 12:48:44.349059 kernel: Run /init as init process Jan 15 12:48:44.349066 kernel: with arguments: Jan 15 12:48:44.349075 kernel: /init Jan 15 12:48:44.349082 kernel: with environment: Jan 15 12:48:44.349089 kernel: HOME=/ Jan 15 12:48:44.349096 kernel: TERM=linux Jan 15 12:48:44.349103 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 15 12:48:44.349113 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 15 12:48:44.349122 systemd[1]: Detected virtualization microsoft. Jan 15 12:48:44.349130 systemd[1]: Detected architecture arm64. Jan 15 12:48:44.349139 systemd[1]: Running in initrd. Jan 15 12:48:44.349147 systemd[1]: No hostname configured, using default hostname. Jan 15 12:48:44.349155 systemd[1]: Hostname set to . Jan 15 12:48:44.349163 systemd[1]: Initializing machine ID from random generator. Jan 15 12:48:44.349171 systemd[1]: Queued start job for default target initrd.target. Jan 15 12:48:44.349179 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 12:48:44.349187 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 12:48:44.349196 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 12:48:44.349205 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 12:48:44.349213 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 12:48:44.349221 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 12:48:44.349231 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 15 12:48:44.349239 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 15 12:48:44.349247 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 12:48:44.349257 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 12:48:44.349265 systemd[1]: Reached target paths.target - Path Units. Jan 15 12:48:44.349272 systemd[1]: Reached target slices.target - Slice Units. Jan 15 12:48:44.349280 systemd[1]: Reached target swap.target - Swaps. Jan 15 12:48:44.349288 systemd[1]: Reached target timers.target - Timer Units. Jan 15 12:48:44.349296 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 12:48:44.349304 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 12:48:44.349312 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 12:48:44.349320 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 15 12:48:44.349329 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 12:48:44.349337 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 12:48:44.349345 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 12:48:44.349353 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 12:48:44.354512 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 12:48:44.354548 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 12:48:44.354566 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 12:48:44.354581 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 12:48:44.354597 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 12:48:44.354624 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 12:48:44.354679 systemd-journald[217]: Collecting audit messages is disabled. Jan 15 12:48:44.354701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:48:44.354710 systemd-journald[217]: Journal started Jan 15 12:48:44.354732 systemd-journald[217]: Runtime Journal (/run/log/journal/d615df8a5b214f81a712e3a4a82b8a37) is 8.0M, max 78.5M, 70.5M free. Jan 15 12:48:44.355209 systemd-modules-load[218]: Inserted module 'overlay' Jan 15 12:48:44.369655 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 12:48:44.380647 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 12:48:44.405901 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 12:48:44.405928 kernel: Bridge firewalling registered Jan 15 12:48:44.398015 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 15 12:48:44.398452 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 12:48:44.413281 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 12:48:44.422353 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 12:48:44.435577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:48:44.459998 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 12:48:44.469550 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 12:48:44.496760 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 12:48:44.521584 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 12:48:44.529175 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:48:44.543803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 12:48:44.550508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 12:48:44.568384 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 12:48:44.598723 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 12:48:44.607570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 12:48:44.631892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 15 12:48:44.648537 dracut-cmdline[251]: dracut-dracut-053 Jan 15 12:48:44.659812 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 15 12:48:44.655703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 12:48:44.672857 systemd-resolved[252]: Positive Trust Anchors: Jan 15 12:48:44.672868 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 12:48:44.672899 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 12:48:44.675072 systemd-resolved[252]: Defaulting to hostname 'linux'. Jan 15 12:48:44.694589 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 12:48:44.717590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 12:48:44.802382 kernel: SCSI subsystem initialized Jan 15 12:48:44.811383 kernel: Loading iSCSI transport class v2.0-870. Jan 15 12:48:44.821385 kernel: iscsi: registered transport (tcp) Jan 15 12:48:44.841451 kernel: iscsi: registered transport (qla4xxx) Jan 15 12:48:44.841513 kernel: QLogic iSCSI HBA Driver Jan 15 12:48:44.883873 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 12:48:44.903552 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 12:48:44.936026 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 15 12:48:44.936068 kernel: device-mapper: uevent: version 1.0.3 Jan 15 12:48:44.936418 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 15 12:48:44.993387 kernel: raid6: neonx8 gen() 15741 MB/s Jan 15 12:48:45.013375 kernel: raid6: neonx4 gen() 15626 MB/s Jan 15 12:48:45.033378 kernel: raid6: neonx2 gen() 13246 MB/s Jan 15 12:48:45.054379 kernel: raid6: neonx1 gen() 10486 MB/s Jan 15 12:48:45.074379 kernel: raid6: int64x8 gen() 6956 MB/s Jan 15 12:48:45.094373 kernel: raid6: int64x4 gen() 7341 MB/s Jan 15 12:48:45.115377 kernel: raid6: int64x2 gen() 6127 MB/s Jan 15 12:48:45.139322 kernel: raid6: int64x1 gen() 5058 MB/s Jan 15 12:48:45.139350 kernel: raid6: using algorithm neonx8 gen() 15741 MB/s Jan 15 12:48:45.164127 kernel: raid6: .... 
xor() 11912 MB/s, rmw enabled Jan 15 12:48:45.164143 kernel: raid6: using neon recovery algorithm Jan 15 12:48:45.176860 kernel: xor: measuring software checksum speed Jan 15 12:48:45.176891 kernel: 8regs : 19702 MB/sec Jan 15 12:48:45.180606 kernel: 32regs : 19622 MB/sec Jan 15 12:48:45.184197 kernel: arm64_neon : 26927 MB/sec Jan 15 12:48:45.188824 kernel: xor: using function: arm64_neon (26927 MB/sec) Jan 15 12:48:45.240425 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 12:48:45.252424 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 12:48:45.270518 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 12:48:45.295355 systemd-udevd[437]: Using default interface naming scheme 'v255'. Jan 15 12:48:45.301028 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 12:48:45.320541 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 12:48:45.350384 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation Jan 15 12:48:45.380469 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 12:48:45.403700 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 12:48:45.447397 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 12:48:45.472103 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 12:48:45.500896 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 12:48:45.513671 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 12:48:45.530358 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 12:48:45.542467 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 12:48:45.568415 kernel: hv_vmbus: Vmbus version:5.3 Jan 15 12:48:45.568626 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 12:48:45.613129 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 15 12:48:45.613160 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 15 12:48:45.613171 kernel: PTP clock support registered Jan 15 12:48:45.613180 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 15 12:48:45.594950 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 12:48:45.642176 kernel: hv_vmbus: registering driver hid_hyperv Jan 15 12:48:45.642200 kernel: hv_utils: Registering HyperV Utility Driver Jan 15 12:48:45.642218 kernel: hv_vmbus: registering driver hv_utils Jan 15 12:48:45.595103 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:48:45.477409 kernel: hv_utils: Heartbeat IC version 3.0 Jan 15 12:48:45.489591 kernel: hv_utils: Shutdown IC version 3.2 Jan 15 12:48:45.489610 kernel: hv_utils: TimeSync IC version 4.0 Jan 15 12:48:45.489618 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 15 12:48:45.489629 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 15 12:48:45.489637 kernel: hv_vmbus: registering driver hv_netvsc Jan 15 12:48:45.489646 systemd-journald[217]: Time jumped backwards, rotating. 
Jan 15 12:48:45.489683 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 15 12:48:45.637639 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 12:48:45.524487 kernel: hv_vmbus: registering driver hv_storvsc Jan 15 12:48:45.661029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:48:45.545091 kernel: scsi host1: storvsc_host_t Jan 15 12:48:45.661255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:48:45.570691 kernel: scsi host0: storvsc_host_t Jan 15 12:48:45.570862 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 15 12:48:45.570893 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 15 12:48:45.462240 systemd-resolved[252]: Clock change detected. Flushing caches. Jan 15 12:48:45.498465 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:48:45.518387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:48:45.537489 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 12:48:45.557186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:48:45.635464 kernel: hv_netvsc 000d3ac5-af3c-000d-3ac5-af3c000d3ac5 eth0: VF slot 1 added Jan 15 12:48:45.584591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:48:45.655199 kernel: hv_vmbus: registering driver hv_pci Jan 15 12:48:45.584657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:48:45.596935 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:48:45.678677 kernel: hv_pci 30b0463b-1ae6-448a-9ff8-e358d0752b6d: PCI VMBus probing: Using version 0x10004 Jan 15 12:48:45.785263 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 15 12:48:45.785397 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 15 12:48:45.785407 kernel: hv_pci 30b0463b-1ae6-448a-9ff8-e358d0752b6d: PCI host bridge to bus 1ae6:00 Jan 15 12:48:45.785500 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 15 12:48:45.785587 kernel: pci_bus 1ae6:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 15 12:48:45.785687 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 15 12:48:45.785777 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 15 12:48:45.785859 kernel: pci_bus 1ae6:00: No busn resource found for root bus, will use [bus 00-ff] Jan 15 12:48:45.785933 kernel: pci 1ae6:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 15 12:48:45.786073 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 15 12:48:45.786160 kernel: pci 1ae6:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 15 12:48:45.786244 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 15 12:48:45.786325 kernel: pci 1ae6:00:02.0: enabling Extended Tags Jan 15 12:48:45.786408 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 15 12:48:45.786489 kernel: pci 1ae6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1ae6:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 15 12:48:45.786570 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:48:45.786580 kernel: pci_bus 1ae6:00: busn_res: [bus 00-ff] end is updated to 00 Jan 15 12:48:45.786654 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 15 12:48:45.786737 kernel: pci 
1ae6:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 15 12:48:45.648220 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:48:45.687426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:48:45.717733 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 12:48:45.835854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:48:45.860452 kernel: mlx5_core 1ae6:00:02.0: enabling device (0000 -> 0002) Jan 15 12:48:46.104656 kernel: mlx5_core 1ae6:00:02.0: firmware version: 16.30.1284 Jan 15 12:48:46.104878 kernel: hv_netvsc 000d3ac5-af3c-000d-3ac5-af3c000d3ac5 eth0: VF registering: eth1 Jan 15 12:48:46.104973 kernel: mlx5_core 1ae6:00:02.0 eth1: joined to eth0 Jan 15 12:48:46.106217 kernel: mlx5_core 1ae6:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 15 12:48:46.115021 kernel: mlx5_core 1ae6:00:02.0 enP6886s1: renamed from eth1 Jan 15 12:48:46.339608 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 15 12:48:46.367318 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (488) Jan 15 12:48:46.367358 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (490) Jan 15 12:48:46.397288 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 15 12:48:46.409554 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 15 12:48:46.420897 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 15 12:48:46.450298 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 12:48:46.474045 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:48:46.480021 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:48:46.489026 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:48:46.562438 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 15 12:48:46.614317 (udev-worker)[483]: sda9: Failed to create/update device symlink '/dev/disk/by-partlabel/ROOT', ignoring: No such file or directory Jan 15 12:48:47.489821 disk-uuid[606]: The operation has completed successfully. Jan 15 12:48:47.494932 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:48:47.548495 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 12:48:47.548609 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 12:48:47.582217 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 15 12:48:47.596112 sh[719]: Success Jan 15 12:48:47.637045 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 15 12:48:47.825464 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 15 12:48:47.835119 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 15 12:48:47.844377 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 15 12:48:47.874056 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234 Jan 15 12:48:47.874095 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 15 12:48:47.881071 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 15 12:48:47.886292 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 12:48:47.890686 kernel: BTRFS info (device dm-0): using free space tree Jan 15 12:48:48.184305 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 15 12:48:48.189715 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 12:48:48.210308 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 12:48:48.218207 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 12:48:48.258123 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:48:48.258182 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 15 12:48:48.258192 kernel: BTRFS info (device sda6): using free space tree Jan 15 12:48:48.279056 kernel: BTRFS info (device sda6): auto enabling async discard Jan 15 12:48:48.293912 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 15 12:48:48.299209 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:48:48.305808 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 12:48:48.320448 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 12:48:48.345576 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 12:48:48.369243 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 12:48:48.394505 systemd-networkd[903]: lo: Link UP Jan 15 12:48:48.394517 systemd-networkd[903]: lo: Gained carrier Jan 15 12:48:48.396493 systemd-networkd[903]: Enumeration completed Jan 15 12:48:48.398865 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 12:48:48.398869 systemd-networkd[903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 12:48:48.399173 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 12:48:48.409074 systemd[1]: Reached target network.target - Network. Jan 15 12:48:48.497017 kernel: mlx5_core 1ae6:00:02.0 enP6886s1: Link up Jan 15 12:48:48.545018 kernel: hv_netvsc 000d3ac5-af3c-000d-3ac5-af3c000d3ac5 eth0: Data path switched to VF: enP6886s1 Jan 15 12:48:48.545934 systemd-networkd[903]: enP6886s1: Link UP Jan 15 12:48:48.546196 systemd-networkd[903]: eth0: Link UP Jan 15 12:48:48.546549 systemd-networkd[903]: eth0: Gained carrier Jan 15 12:48:48.546558 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 15 12:48:48.573282 systemd-networkd[903]: enP6886s1: Gained carrier Jan 15 12:48:48.585034 systemd-networkd[903]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 12:48:49.260128 ignition[889]: Ignition 2.19.0 Jan 15 12:48:49.260140 ignition[889]: Stage: fetch-offline Jan 15 12:48:49.264927 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 12:48:49.260177 ignition[889]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:48:49.281229 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 15 12:48:49.260185 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:48:49.260276 ignition[889]: parsed url from cmdline: "" Jan 15 12:48:49.260279 ignition[889]: no config URL provided Jan 15 12:48:49.260283 ignition[889]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 12:48:49.260290 ignition[889]: no config at "/usr/lib/ignition/user.ign" Jan 15 12:48:49.260294 ignition[889]: failed to fetch config: resource requires networking Jan 15 12:48:49.260479 ignition[889]: Ignition finished successfully Jan 15 12:48:49.295291 ignition[911]: Ignition 2.19.0 Jan 15 12:48:49.295838 ignition[911]: Stage: fetch Jan 15 12:48:49.296072 ignition[911]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:48:49.296083 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:48:49.296214 ignition[911]: parsed url from cmdline: "" Jan 15 12:48:49.296218 ignition[911]: no config URL provided Jan 15 12:48:49.296222 ignition[911]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 12:48:49.296230 ignition[911]: no config at "/usr/lib/ignition/user.ign" Jan 15 12:48:49.296262 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 15 12:48:49.386687 ignition[911]: GET result: OK Jan 15 12:48:49.386807 ignition[911]: config has been read from IMDS userdata Jan 15 12:48:49.386883 ignition[911]: parsing config with SHA512: ce01eba26a1f18f21a02b458829b6c91dbfdfa474cb4de2eed53561ce8b6f35744866081a6d18451bfa249de957ab5fbbc26ac5f5068e39ddf0ed98d57e73dd9 Jan 15 12:48:49.391132 unknown[911]: fetched base config from "system" Jan 15 12:48:49.391557 ignition[911]: fetch: fetch complete Jan 15 12:48:49.391138 unknown[911]: fetched base config from "system" Jan 15 12:48:49.391562 ignition[911]: fetch: fetch passed Jan 15 12:48:49.391143 unknown[911]: fetched user config from "azure" Jan 15 12:48:49.391602 ignition[911]: Ignition finished successfully Jan 15 12:48:49.396212 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 15 12:48:49.423151 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 15 12:48:49.445927 ignition[917]: Ignition 2.19.0 Jan 15 12:48:49.445939 ignition[917]: Stage: kargs Jan 15 12:48:49.450933 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 12:48:49.447578 ignition[917]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:48:49.447597 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:48:49.448618 ignition[917]: kargs: kargs passed Jan 15 12:48:49.472338 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 12:48:49.448670 ignition[917]: Ignition finished successfully Jan 15 12:48:49.493062 ignition[924]: Ignition 2.19.0 Jan 15 12:48:49.498696 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 15 12:48:49.493068 ignition[924]: Stage: disks
Jan 15 12:48:49.508411 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 15 12:48:49.493277 ignition[924]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:49.521137 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 15 12:48:49.493286 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:49.531141 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 12:48:49.494218 ignition[924]: disks: disks passed
Jan 15 12:48:49.542688 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 12:48:49.494265 ignition[924]: Ignition finished successfully
Jan 15 12:48:49.553045 systemd[1]: Reached target basic.target - Basic System.
Jan 15 12:48:49.577271 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 15 12:48:49.649460 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 15 12:48:49.658420 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 15 12:48:49.675149 systemd-networkd[903]: enP6886s1: Gained IPv6LL
Jan 15 12:48:49.679208 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 15 12:48:49.737023 kernel: EXT4-fs (sda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 15 12:48:49.737261 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 15 12:48:49.742324 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 15 12:48:49.788112 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 12:48:49.796146 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 15 12:48:49.806239 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 15 12:48:49.841578 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943)
Jan 15 12:48:49.841603 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:49.834273 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 15 12:48:49.872493 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:48:49.872518 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:48:49.834314 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 12:48:49.855918 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 15 12:48:49.896309 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:48:49.897191 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 15 12:48:49.903979 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 12:48:49.931136 systemd-networkd[903]: eth0: Gained IPv6LL
Jan 15 12:48:50.361461 coreos-metadata[945]: Jan 15 12:48:50.361 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 15 12:48:50.372438 coreos-metadata[945]: Jan 15 12:48:50.372 INFO Fetch successful
Jan 15 12:48:50.377575 coreos-metadata[945]: Jan 15 12:48:50.372 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 15 12:48:50.390450 coreos-metadata[945]: Jan 15 12:48:50.390 INFO Fetch successful
Jan 15 12:48:50.406608 coreos-metadata[945]: Jan 15 12:48:50.406 INFO wrote hostname ci-4081.3.0-a-b8bd16053a to /sysroot/etc/hostname
Jan 15 12:48:50.416042 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 15 12:48:50.732026 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory
Jan 15 12:48:50.775764 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Jan 15 12:48:50.810723 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Jan 15 12:48:50.817147 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 15 12:48:51.667523 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 15 12:48:51.683241 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 15 12:48:51.703060 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:51.703194 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 15 12:48:51.711847 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 15 12:48:51.736064 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 15 12:48:51.751144 ignition[1061]: INFO : Ignition 2.19.0
Jan 15 12:48:51.751144 ignition[1061]: INFO : Stage: mount
Jan 15 12:48:51.761013 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:51.761013 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:51.761013 ignition[1061]: INFO : mount: mount passed
Jan 15 12:48:51.761013 ignition[1061]: INFO : Ignition finished successfully
Jan 15 12:48:51.759340 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 15 12:48:51.781237 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 15 12:48:51.801207 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 12:48:51.835521 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1073)
Jan 15 12:48:51.835543 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:48:51.835553 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:48:51.846076 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:48:51.853009 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:48:51.854496 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 12:48:51.888075 ignition[1090]: INFO : Ignition 2.19.0
Jan 15 12:48:51.888075 ignition[1090]: INFO : Stage: files
Jan 15 12:48:51.898108 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:51.898108 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:51.898108 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping
Jan 15 12:48:51.918901 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 15 12:48:51.918901 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 15 12:48:51.966151 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 15 12:48:51.974149 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 15 12:48:51.974149 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 15 12:48:51.966549 unknown[1090]: wrote ssh authorized keys file for user: core
Jan 15 12:48:51.995691 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 15 12:48:51.995691 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 15 12:48:52.044409 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 15 12:48:52.175960 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:48:52.204393 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 15 12:48:52.642219 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 15 12:48:52.855840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 15 12:48:52.855840 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 15 12:48:52.891267 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 12:48:52.902063 ignition[1090]: INFO : files: files passed
Jan 15 12:48:52.902063 ignition[1090]: INFO : Ignition finished successfully
Jan 15 12:48:52.920298 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 15 12:48:52.962677 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 15 12:48:52.979659 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 15 12:48:52.998921 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 15 12:48:52.999043 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 15 12:48:53.035824 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 12:48:53.035824 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 12:48:53.054688 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 12:48:53.068046 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 12:48:53.085849 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 15 12:48:53.105521 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 15 12:48:53.141911 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 15 12:48:53.144081 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 15 12:48:53.155116 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 15 12:48:53.167613 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 15 12:48:53.179043 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 15 12:48:53.195288 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 15 12:48:53.221329 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 15 12:48:53.241224 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 15 12:48:53.263735 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 15 12:48:53.282975 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 12:48:53.297572 systemd[1]: Stopped target timers.target - Timer Units.
Jan 15 12:48:53.311043 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 15 12:48:53.311231 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 15 12:48:53.332831 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 15 12:48:53.345618 systemd[1]: Stopped target basic.target - Basic System.
Jan 15 12:48:53.356883 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 15 12:48:53.364752 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 12:48:53.377921 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 15 12:48:53.391352 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 15 12:48:53.404276 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 12:48:53.423899 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 15 12:48:53.436203 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 15 12:48:53.448435 systemd[1]: Stopped target swap.target - Swaps.
Jan 15 12:48:53.457866 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 15 12:48:53.458041 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 12:48:53.469792 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 15 12:48:53.482975 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 12:48:53.496387 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 15 12:48:53.496675 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 12:48:53.512117 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 15 12:48:53.512268 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 15 12:48:53.525374 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 15 12:48:53.525506 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 12:48:53.547169 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 15 12:48:53.547287 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 15 12:48:53.559472 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 15 12:48:53.559598 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 15 12:48:53.608252 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 15 12:48:53.621747 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 15 12:48:53.637083 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 15 12:48:53.637262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 12:48:53.651809 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 15 12:48:53.674866 ignition[1143]: INFO : Ignition 2.19.0
Jan 15 12:48:53.674866 ignition[1143]: INFO : Stage: umount
Jan 15 12:48:53.651924 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 12:48:53.696269 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:48:53.696269 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:48:53.696269 ignition[1143]: INFO : umount: umount passed
Jan 15 12:48:53.696269 ignition[1143]: INFO : Ignition finished successfully
Jan 15 12:48:53.702139 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 15 12:48:53.702252 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 15 12:48:53.716594 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 15 12:48:53.717159 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 15 12:48:53.717245 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 15 12:48:53.731375 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 15 12:48:53.731441 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 15 12:48:53.748242 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 15 12:48:53.748312 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 15 12:48:53.759441 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 15 12:48:53.759498 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 15 12:48:53.770260 systemd[1]: Stopped target network.target - Network.
Jan 15 12:48:53.781525 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 15 12:48:53.781601 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 12:48:53.796731 systemd[1]: Stopped target paths.target - Path Units.
Jan 15 12:48:53.808395 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 15 12:48:53.820051 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 12:48:53.832553 systemd[1]: Stopped target slices.target - Slice Units.
Jan 15 12:48:53.844257 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 15 12:48:53.855586 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 15 12:48:53.855631 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 12:48:53.866695 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 15 12:48:53.866755 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 12:48:53.878646 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 15 12:48:53.878745 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 15 12:48:53.884722 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 15 12:48:53.884780 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 15 12:48:53.897110 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 15 12:48:53.913772 systemd-networkd[903]: eth0: DHCPv6 lease lost
Jan 15 12:48:53.913891 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 15 12:48:53.928723 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 15 12:48:53.928823 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 15 12:48:53.934723 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 15 12:48:53.934823 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 15 12:48:53.951372 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 15 12:48:53.951614 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 15 12:48:53.970709 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 15 12:48:53.970787 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 12:48:53.981807 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 15 12:48:53.981885 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 15 12:48:54.011248 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 15 12:48:54.177865 kernel: hv_netvsc 000d3ac5-af3c-000d-3ac5-af3c000d3ac5 eth0: Data path switched from VF: enP6886s1
Jan 15 12:48:54.022396 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 15 12:48:54.022479 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 15 12:48:54.035728 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 15 12:48:54.035788 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 15 12:48:54.046375 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 15 12:48:54.046435 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 15 12:48:54.061155 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 15 12:48:54.061217 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 12:48:54.075207 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 12:48:54.130963 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 15 12:48:54.131189 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 12:48:54.151411 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 15 12:48:54.151458 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 15 12:48:54.171808 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 15 12:48:54.171858 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 12:48:54.178377 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 15 12:48:54.178428 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 12:48:54.197605 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 15 12:48:54.197673 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 15 12:48:54.209709 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 15 12:48:54.209768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:48:54.242739 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 15 12:48:54.263121 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 15 12:48:54.263209 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 12:48:54.278945 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 15 12:48:54.279035 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 12:48:54.292320 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 15 12:48:54.292385 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 12:48:54.307151 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:48:54.307214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:48:54.321209 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 15 12:48:54.321297 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 15 12:48:54.333198 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 15 12:48:54.339895 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 15 12:48:54.353256 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 15 12:48:54.507213 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jan 15 12:48:54.371325 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 15 12:48:54.410551 systemd[1]: Switching root.
Jan 15 12:48:54.516598 systemd-journald[217]: Journal stopped
Jan 15 12:48:59.352718 kernel: SELinux: policy capability network_peer_controls=1
Jan 15 12:48:59.352741 kernel: SELinux: policy capability open_perms=1
Jan 15 12:48:59.352751 kernel: SELinux: policy capability extended_socket_class=1
Jan 15 12:48:59.352759 kernel: SELinux: policy capability always_check_network=0
Jan 15 12:48:59.352768 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 15 12:48:59.352776 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 15 12:48:59.352785 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 15 12:48:59.352793 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 15 12:48:59.352801 kernel: audit: type=1403 audit(1736945336.328:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 15 12:48:59.352811 systemd[1]: Successfully loaded SELinux policy in 162.400ms.
Jan 15 12:48:59.352823 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.530ms.
Jan 15 12:48:59.352833 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 15 12:48:59.352842 systemd[1]: Detected virtualization microsoft.
Jan 15 12:48:59.352850 systemd[1]: Detected architecture arm64.
Jan 15 12:48:59.352859 systemd[1]: Detected first boot.
Jan 15 12:48:59.352873 systemd[1]: Hostname set to <ci-4081.3.0-a-b8bd16053a>.
Jan 15 12:48:59.352883 systemd[1]: Initializing machine ID from random generator.
Jan 15 12:48:59.352892 zram_generator::config[1185]: No configuration found.
Jan 15 12:48:59.352901 systemd[1]: Populated /etc with preset unit settings.
Jan 15 12:48:59.352910 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 15 12:48:59.352919 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 15 12:48:59.352928 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 15 12:48:59.352939 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 15 12:48:59.352949 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 15 12:48:59.352958 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 15 12:48:59.352967 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 15 12:48:59.352976 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 15 12:48:59.352985 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 15 12:48:59.359444 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 15 12:48:59.359464 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 15 12:48:59.359473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 12:48:59.359484 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 12:48:59.359493 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 15 12:48:59.359502 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 15 12:48:59.359512 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 15 12:48:59.359523 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 15 12:48:59.359533 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 15 12:48:59.359545 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 12:48:59.359554 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 15 12:48:59.359563 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 15 12:48:59.359575 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 15 12:48:59.359585 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 15 12:48:59.359594 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 12:48:59.359604 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 15 12:48:59.359613 systemd[1]: Reached target slices.target - Slice Units.
Jan 15 12:48:59.359624 systemd[1]: Reached target swap.target - Swaps.
Jan 15 12:48:59.359633 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 15 12:48:59.359643 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 15 12:48:59.359652 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 12:48:59.359662 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 15 12:48:59.359672 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 12:48:59.359683 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 15 12:48:59.359693 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 15 12:48:59.359702 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 15 12:48:59.359712 systemd[1]: Mounting media.mount - External Media Directory...
Jan 15 12:48:59.359722 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 15 12:48:59.359732 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 15 12:48:59.359742 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 15 12:48:59.359754 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 15 12:48:59.359764 systemd[1]: Reached target machines.target - Containers.
Jan 15 12:48:59.359773 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 15 12:48:59.359783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 12:48:59.359793 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 15 12:48:59.359802 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 15 12:48:59.359812 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 15 12:48:59.359822 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 15 12:48:59.359833 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 15 12:48:59.359842 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 15 12:48:59.359852 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 15 12:48:59.359862 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 15 12:48:59.359871 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 15 12:48:59.359881 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 15 12:48:59.359891 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 15 12:48:59.359900 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 15 12:48:59.359911 kernel: fuse: init (API version 7.39)
Jan 15 12:48:59.359920 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 15 12:48:59.359930 kernel: loop: module loaded
Jan 15 12:48:59.359939 kernel: ACPI: bus type drm_connector registered
Jan 15 12:48:59.359948 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 15 12:48:59.359958 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 15 12:48:59.359968 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 15 12:48:59.364526 systemd-journald[1285]: Collecting audit messages is disabled.
Jan 15 12:48:59.364557 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 12:48:59.364568 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 15 12:48:59.364578 systemd-journald[1285]: Journal started
Jan 15 12:48:59.364600 systemd-journald[1285]: Runtime Journal (/run/log/journal/d25d1bc3bee74f6c86168c5009ffdfa4) is 8.0M, max 78.5M, 70.5M free.
Jan 15 12:48:58.128050 systemd[1]: Queued start job for default target multi-user.target.
Jan 15 12:48:59.368982 systemd[1]: Stopped verity-setup.service.
Jan 15 12:48:58.346172 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 15 12:48:58.346581 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 15 12:48:58.346898 systemd[1]: systemd-journald.service: Consumed 3.081s CPU time.
Jan 15 12:48:59.392923 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 15 12:48:59.392436 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 15 12:48:59.398978 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 15 12:48:59.406271 systemd[1]: Mounted media.mount - External Media Directory.
Jan 15 12:48:59.412353 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 15 12:48:59.419276 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 15 12:48:59.425953 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 15 12:48:59.431910 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 15 12:48:59.441659 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 12:48:59.457690 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 15 12:48:59.457828 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 15 12:48:59.473642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 15 12:48:59.474532 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 15 12:48:59.489666 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 15 12:48:59.491038 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 15 12:48:59.505647 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 15 12:48:59.505824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 15 12:48:59.523662 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 15 12:48:59.523822 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 15 12:48:59.532588 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 15 12:48:59.534224 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 15 12:48:59.540781 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 15 12:48:59.547782 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 15 12:48:59.557035 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 15 12:48:59.564792 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 12:48:59.581876 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 15 12:48:59.593520 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 15 12:48:59.603146 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 15 12:48:59.612464 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 15 12:48:59.612512 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 12:48:59.619502 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 15 12:48:59.630233 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 15 12:48:59.640932 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 15 12:48:59.646857 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 12:48:59.651377 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 15 12:48:59.660066 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 15 12:48:59.668429 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 15 12:48:59.670304 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 15 12:48:59.681394 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 15 12:48:59.684241 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 12:48:59.704324 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 15 12:48:59.719284 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 15 12:48:59.738138 systemd-journald[1285]: Time spent on flushing to /var/log/journal/d25d1bc3bee74f6c86168c5009ffdfa4 is 62.892ms for 902 entries.
Jan 15 12:48:59.738138 systemd-journald[1285]: System Journal (/var/log/journal/d25d1bc3bee74f6c86168c5009ffdfa4) is 11.8M, max 2.6G, 2.6G free.
Jan 15 12:48:59.903500 systemd-journald[1285]: Received client request to flush runtime journal.
Jan 15 12:48:59.903546 kernel: loop0: detected capacity change from 0 to 31320
Jan 15 12:48:59.903566 systemd-journald[1285]: /var/log/journal/d25d1bc3bee74f6c86168c5009ffdfa4/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jan 15 12:48:59.903589 systemd-journald[1285]: Rotating system journal.
Jan 15 12:48:59.729338 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 15 12:48:59.747279 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 15 12:48:59.759227 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 15 12:48:59.766596 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 15 12:48:59.781100 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 15 12:48:59.817176 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 15 12:48:59.823893 udevadm[1322]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 15 12:48:59.830598 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 15 12:48:59.852711 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Jan 15 12:48:59.852722 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Jan 15 12:48:59.860149 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 12:48:59.884229 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 15 12:48:59.907567 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 15 12:48:59.920775 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 12:48:59.930587 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 15 12:48:59.933041 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 15 12:49:00.064205 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 15 12:49:00.078202 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 15 12:49:00.096763 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Jan 15 12:49:00.096783 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Jan 15 12:49:00.103101 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 12:49:00.181039 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 15 12:49:00.231047 kernel: loop1: detected capacity change from 0 to 194512
Jan 15 12:49:00.289056 kernel: loop2: detected capacity change from 0 to 114432
Jan 15 12:49:00.656086 kernel: loop3: detected capacity change from 0 to 114328
Jan 15 12:49:00.926063 kernel: loop4: detected capacity change from 0 to 31320
Jan 15 12:49:00.934128 kernel: loop5: detected capacity change from 0 to 194512
Jan 15 12:49:00.943059 kernel: loop6: detected capacity change from 0 to 114432
Jan 15 12:49:00.951105 kernel: loop7: detected capacity change from 0 to 114328
Jan 15 12:49:00.954845 (sd-merge)[1349]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 15 12:49:00.955312 (sd-merge)[1349]: Merged extensions into '/usr'.
Jan 15 12:49:00.959231 systemd[1]: Reloading requested from client PID 1319 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 15 12:49:00.959505 systemd[1]: Reloading...
Jan 15 12:49:01.027074 zram_generator::config[1372]: No configuration found.
Jan 15 12:49:01.207247 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 15 12:49:01.280096 systemd[1]: Reloading finished in 320 ms.
Jan 15 12:49:01.331388 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 15 12:49:01.339970 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 15 12:49:01.360185 systemd[1]: Starting ensure-sysext.service...
Jan 15 12:49:01.365508 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 15 12:49:01.376244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 12:49:01.400058 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 15 12:49:01.400329 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 15 12:49:01.401063 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 15 12:49:01.401285 systemd-tmpfiles[1432]: ACLs are not supported, ignoring.
Jan 15 12:49:01.401333 systemd-tmpfiles[1432]: ACLs are not supported, ignoring.
Jan 15 12:49:01.404685 systemd-tmpfiles[1432]: Detected autofs mount point /boot during canonicalization of boot.
Jan 15 12:49:01.404825 systemd-tmpfiles[1432]: Skipping /boot
Jan 15 12:49:01.409685 systemd-udevd[1433]: Using default interface naming scheme 'v255'.
Jan 15 12:49:01.418669 systemd-tmpfiles[1432]: Detected autofs mount point /boot during canonicalization of boot.
Jan 15 12:49:01.418688 systemd-tmpfiles[1432]: Skipping /boot
Jan 15 12:49:01.419226 systemd[1]: Reloading requested from client PID 1431 ('systemctl') (unit ensure-sysext.service)...
Jan 15 12:49:01.419247 systemd[1]: Reloading...
Jan 15 12:49:01.495036 zram_generator::config[1461]: No configuration found.
Jan 15 12:49:01.690649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 15 12:49:01.755151 kernel: mousedev: PS/2 mouse device common for all mice
Jan 15 12:49:01.755243 kernel: hv_vmbus: registering driver hv_balloon
Jan 15 12:49:01.779238 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 15 12:49:01.779720 systemd[1]: Reloading finished in 360 ms.
Jan 15 12:49:01.792168 kernel: hv_vmbus: registering driver hyperv_fb
Jan 15 12:49:01.792274 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 15 12:49:01.802482 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 12:49:01.821006 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 15 12:49:01.821097 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 15 12:49:01.821123 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 15 12:49:01.837769 kernel: Console: switching to colour dummy device 80x25
Jan 15 12:49:01.839179 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 12:49:01.850211 kernel: Console: switching to colour frame buffer device 128x48
Jan 15 12:49:01.892139 systemd[1]: Finished ensure-sysext.service.
Jan 15 12:49:01.923066 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1512)
Jan 15 12:49:01.924678 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 15 12:49:01.958428 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 15 12:49:01.967534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 12:49:01.970195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 15 12:49:01.979223 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 15 12:49:01.991217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 15 12:49:02.008285 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 15 12:49:02.016878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 12:49:02.020236 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 15 12:49:02.034318 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 15 12:49:02.049205 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 12:49:02.055427 systemd[1]: Reached target time-set.target - System Time Set.
Jan 15 12:49:02.068644 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 15 12:49:02.078657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:49:02.092742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 15 12:49:02.093253 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 15 12:49:02.101644 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 15 12:49:02.101788 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 15 12:49:02.108327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 15 12:49:02.108462 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 15 12:49:02.118858 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 15 12:49:02.119310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 15 12:49:02.155159 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 15 12:49:02.163695 augenrules[1624]: No rules
Jan 15 12:49:02.167300 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 15 12:49:02.176468 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 15 12:49:02.189601 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 15 12:49:02.204251 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 15 12:49:02.214281 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 15 12:49:02.221304 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 15 12:49:02.221382 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 15 12:49:02.225201 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 15 12:49:02.248571 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:49:02.249087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:49:02.267415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:49:02.279435 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 15 12:49:02.284028 lvm[1633]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 15 12:49:02.308656 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 15 12:49:02.330770 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 15 12:49:02.341420 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 15 12:49:02.350400 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 15 12:49:02.367197 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 15 12:49:02.379768 lvm[1650]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 15 12:49:02.402880 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 15 12:49:02.430969 systemd-networkd[1609]: lo: Link UP
Jan 15 12:49:02.430979 systemd-networkd[1609]: lo: Gained carrier
Jan 15 12:49:02.436245 systemd-networkd[1609]: Enumeration completed
Jan 15 12:49:02.436413 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 15 12:49:02.437089 systemd-networkd[1609]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:49:02.437093 systemd-networkd[1609]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 15 12:49:02.443920 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 15 12:49:02.457174 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 15 12:49:02.463980 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 15 12:49:02.473658 systemd-resolved[1611]: Positive Trust Anchors:
Jan 15 12:49:02.473675 systemd-resolved[1611]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 12:49:02.473707 systemd-resolved[1611]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 12:49:02.511036 kernel: mlx5_core 1ae6:00:02.0 enP6886s1: Link up
Jan 15 12:49:02.512443 systemd-resolved[1611]: Using system hostname 'ci-4081.3.0-a-b8bd16053a'.
Jan 15 12:49:02.530976 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:49:02.546559 kernel: hv_netvsc 000d3ac5-af3c-000d-3ac5-af3c000d3ac5 eth0: Data path switched to VF: enP6886s1
Jan 15 12:49:02.547487 systemd-networkd[1609]: enP6886s1: Link UP
Jan 15 12:49:02.547588 systemd-networkd[1609]: eth0: Link UP
Jan 15 12:49:02.547591 systemd-networkd[1609]: eth0: Gained carrier
Jan 15 12:49:02.547608 systemd-networkd[1609]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:49:02.549773 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 12:49:02.556841 systemd[1]: Reached target network.target - Network.
Jan 15 12:49:02.562277 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 12:49:02.569153 systemd-networkd[1609]: enP6886s1: Gained carrier
Jan 15 12:49:02.574065 systemd-networkd[1609]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 15 12:49:04.395242 systemd-networkd[1609]: eth0: Gained IPv6LL
Jan 15 12:49:04.398217 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 15 12:49:04.406474 systemd[1]: Reached target network-online.target - Network is Online.
Jan 15 12:49:04.523195 systemd-networkd[1609]: enP6886s1: Gained IPv6LL
Jan 15 12:49:05.053013 ldconfig[1314]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 15 12:49:05.067401 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 15 12:49:05.081198 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 15 12:49:05.095905 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 15 12:49:05.104598 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 12:49:05.112334 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 15 12:49:05.120789 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 15 12:49:05.128079 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 15 12:49:05.134723 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 15 12:49:05.142086 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 15 12:49:05.150516 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 15 12:49:05.150560 systemd[1]: Reached target paths.target - Path Units.
Jan 15 12:49:05.157796 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 12:49:05.179010 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 15 12:49:05.186588 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 15 12:49:05.221781 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 15 12:49:05.228337 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 15 12:49:05.234621 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 12:49:05.240336 systemd[1]: Reached target basic.target - Basic System.
Jan 15 12:49:05.247322 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 15 12:49:05.247356 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 15 12:49:05.258284 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 15 12:49:05.267211 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 15 12:49:05.278719 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 15 12:49:05.292273 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 15 12:49:05.302231 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 15 12:49:05.315218 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 15 12:49:05.323864 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 15 12:49:05.326107 jq[1670]: false
Jan 15 12:49:05.323920 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 15 12:49:05.328572 (chronyd)[1664]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 15 12:49:05.337730 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 15 12:49:05.345774 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 15 12:49:05.348202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 12:49:05.356273 KVP[1672]: KVP starting; pid is:1672
Jan 15 12:49:05.363960 chronyd[1677]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 15 12:49:05.364326 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 15 12:49:05.376064 kernel: hv_utils: KVP IC version 4.0
Jan 15 12:49:05.376180 KVP[1672]: KVP LIC Version: 3.1
Jan 15 12:49:05.378115 chronyd[1677]: Timezone right/UTC failed leap second check, ignoring
Jan 15 12:49:05.378345 chronyd[1677]: Loaded seccomp filter (level 2)
Jan 15 12:49:05.379551 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found loop4
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found loop5
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found loop6
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found loop7
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found sda
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found sda1
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found sda2
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found sda3
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found usr
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found sda4
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found sda6
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found sda7
Jan 15 12:49:05.387055 extend-filesystems[1671]: Found sda9
Jan 15 12:49:05.530598 extend-filesystems[1671]: Checking size of /dev/sda9
Jan 15 12:49:05.530598 extend-filesystems[1671]: Old size kept for /dev/sda9
Jan 15 12:49:05.530598 extend-filesystems[1671]: Found sr0
Jan 15 12:49:05.397496 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 15 12:49:05.465588 dbus-daemon[1669]: [system] SELinux support is enabled
Jan 15 12:49:05.619093 coreos-metadata[1666]: Jan 15 12:49:05.585 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 15 12:49:05.619093 coreos-metadata[1666]: Jan 15 12:49:05.594 INFO Fetch successful
Jan 15 12:49:05.619093 coreos-metadata[1666]: Jan 15 12:49:05.594 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 15 12:49:05.619093 coreos-metadata[1666]: Jan 15 12:49:05.600 INFO Fetch successful
Jan 15 12:49:05.619093 coreos-metadata[1666]: Jan 15 12:49:05.600 INFO Fetching http://168.63.129.16/machine/51cce57e-2156-4376-ac12-3806b09c5ec1/69f7f142%2De1fd%2D4124%2Da5c8%2D0a80a2e07d6a.%5Fci%2D4081.3.0%2Da%2Db8bd16053a?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 15 12:49:05.619093 coreos-metadata[1666]: Jan 15 12:49:05.603 INFO Fetch successful
Jan 15 12:49:05.619093 coreos-metadata[1666]: Jan 15 12:49:05.603 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 15 12:49:05.442919 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 15 12:49:05.464885 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 15 12:49:05.490635 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 15 12:49:05.518383 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 15 12:49:05.518890 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 15 12:49:05.528371 systemd[1]: Starting update-engine.service - Update Engine...
Jan 15 12:49:05.622295 jq[1706]: true
Jan 15 12:49:05.559261 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 15 12:49:05.601358 systemd[1]: Started dbus.service - D-Bus System Message Bus.
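Two metadata endpoints appear in the coreos-metadata fetches above: 168.63.129.16 is the Azure wireserver (also the VM's DHCP and DNS host), while 169.254.169.254 is the Instance Metadata Service, which requires a Metadata: true request header. A standalone sketch of the vmSize request the agent logs, runnable only from inside an Azure VM:

```python
# Reproduce the IMDS query from the log (coreos-metadata does the same
# fetch internally); without the Metadata header, IMDS rejects the request.
import urllib.request

url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")
req = urllib.request.Request(url, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # e.g. a VM size string such as "Standard_D2ps_v5"
```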
Jan 15 12:49:05.626965 update_engine[1702]: I20250115 12:49:05.626873 1702 main.cc:92] Flatcar Update Engine starting
Jan 15 12:49:05.629854 systemd[1]: Started chronyd.service - NTP client/server.
Jan 15 12:49:05.637550 coreos-metadata[1666]: Jan 15 12:49:05.636 INFO Fetch successful
Jan 15 12:49:05.639411 update_engine[1702]: I20250115 12:49:05.639349 1702 update_check_scheduler.cc:74] Next update check in 4m45s
Jan 15 12:49:05.641952 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 15 12:49:05.642261 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 15 12:49:05.642516 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 15 12:49:05.643079 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 15 12:49:05.656240 systemd-logind[1693]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 15 12:49:05.659178 systemd-logind[1693]: New seat seat0.
Jan 15 12:49:05.662064 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1712)
Jan 15 12:49:05.663442 systemd[1]: motdgen.service: Deactivated successfully.
Jan 15 12:49:05.663649 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 15 12:49:05.684760 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 15 12:49:05.694764 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 15 12:49:05.724591 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 15 12:49:05.724778 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 15 12:49:05.753662 (ntainerd)[1744]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 15 12:49:05.763099 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 15 12:49:05.778769 dbus-daemon[1669]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 15 12:49:05.785859 systemd[1]: Started update-engine.service - Update Engine.
Jan 15 12:49:05.791794 jq[1743]: true
Jan 15 12:49:05.817969 tar[1735]: linux-arm64/helm
Jan 15 12:49:05.822099 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 15 12:49:05.822315 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 15 12:49:05.822440 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 15 12:49:05.833293 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 15 12:49:05.833416 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 15 12:49:05.852298 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 15 12:49:05.921486 bash[1783]: Updated "/home/core/.ssh/authorized_keys"
Jan 15 12:49:05.927106 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 15 12:49:05.940136 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 15 12:49:06.143539 locksmithd[1779]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 15 12:49:06.345874 tar[1735]: linux-arm64/LICENSE
Jan 15 12:49:06.346283 tar[1735]: linux-arm64/README.md
Jan 15 12:49:06.369073 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 15 12:49:06.499925 containerd[1744]: time="2025-01-15T12:49:06.499096300Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 15 12:49:06.527916 sshd_keygen[1710]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 15 12:49:06.552845 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 15 12:49:06.563265 containerd[1744]: time="2025-01-15T12:49:06.563124940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 15 12:49:06.567220 containerd[1744]: time="2025-01-15T12:49:06.567168980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 15 12:49:06.567220 containerd[1744]: time="2025-01-15T12:49:06.567212300Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 15 12:49:06.567351 containerd[1744]: time="2025-01-15T12:49:06.567229940Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 15 12:49:06.567756 containerd[1744]: time="2025-01-15T12:49:06.567387060Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 15 12:49:06.567756 containerd[1744]: time="2025-01-15T12:49:06.567412140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 15 12:49:06.567756 containerd[1744]: time="2025-01-15T12:49:06.567469500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 15 12:49:06.567756 containerd[1744]: time="2025-01-15T12:49:06.567482580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 15 12:49:06.567756 containerd[1744]: time="2025-01-15T12:49:06.567643380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 15 12:49:06.567756 containerd[1744]: time="2025-01-15T12:49:06.567659500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 15 12:49:06.567756 containerd[1744]: time="2025-01-15T12:49:06.567671420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 15 12:49:06.567756 containerd[1744]: time="2025-01-15T12:49:06.567680700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 15 12:49:06.567756 containerd[1744]: time="2025-01-15T12:49:06.567748540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 15 12:49:06.568481 containerd[1744]: time="2025-01-15T12:49:06.567925620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 15 12:49:06.568481 containerd[1744]: time="2025-01-15T12:49:06.568087500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 15 12:49:06.568481 containerd[1744]: time="2025-01-15T12:49:06.568103700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 15 12:49:06.568481 containerd[1744]: time="2025-01-15T12:49:06.568182420Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 15 12:49:06.568481 containerd[1744]: time="2025-01-15T12:49:06.568220660Z" level=info msg="metadata content store policy set" policy=shared
Jan 15 12:49:06.570609 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 15 12:49:06.581365 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 15 12:49:06.596068 containerd[1744]: time="2025-01-15T12:49:06.595933700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 15 12:49:06.596068 containerd[1744]: time="2025-01-15T12:49:06.596065540Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 15 12:49:06.596191 containerd[1744]: time="2025-01-15T12:49:06.596089980Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 15 12:49:06.596191 containerd[1744]: time="2025-01-15T12:49:06.596115660Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 15 12:49:06.596191 containerd[1744]: time="2025-01-15T12:49:06.596131060Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 15 12:49:06.596329 containerd[1744]: time="2025-01-15T12:49:06.596299060Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 15 12:49:06.596565 containerd[1744]: time="2025-01-15T12:49:06.596535540Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596722020Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596757020Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596773740Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596789660Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596802740Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596816420Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596831620Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596847780Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596860860Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596872980Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596885940Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596905700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596919580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600098 containerd[1744]: time="2025-01-15T12:49:06.596931700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.596944980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.596956660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.596982580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.597011660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.597026620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.597039820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.597054620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.597067140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.597078940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.597092420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.597109820Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.597142500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.597157700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600525 containerd[1744]: time="2025-01-15T12:49:06.597168940Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 15 12:49:06.600788 containerd[1744]: time="2025-01-15T12:49:06.597224860Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 15 12:49:06.600788 containerd[1744]: time="2025-01-15T12:49:06.597246140Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 15 12:49:06.600788 containerd[1744]: time="2025-01-15T12:49:06.597256220Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 15 12:49:06.600788 containerd[1744]: time="2025-01-15T12:49:06.597268660Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 15 12:49:06.600788 containerd[1744]: time="2025-01-15T12:49:06.597278580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600788 containerd[1744]: time="2025-01-15T12:49:06.597292420Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 15 12:49:06.600788 containerd[1744]: time="2025-01-15T12:49:06.597302940Z" level=info msg="NRI interface is disabled by configuration."
Jan 15 12:49:06.600788 containerd[1744]: time="2025-01-15T12:49:06.597316220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.597611020Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.597667060Z" level=info msg="Connect containerd service"
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.597703060Z" level=info msg="using legacy CRI server"
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.597709580Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.597792860Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.598417540Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.598687260Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.598723700Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.598805380Z" level=info msg="Start subscribing containerd event"
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.598838180Z" level=info msg="Start recovering state"
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.598894780Z" level=info msg="Start event monitor"
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.598904620Z" level=info msg="Start snapshots syncer"
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.598914380Z" level=info msg="Start cni network conf syncer for default"
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.598923020Z" level=info msg="Start streaming server"
Jan 15 12:49:06.600938 containerd[1744]: time="2025-01-15T12:49:06.598976820Z" level=info msg="containerd successfully booted in 0.101815s"
Jan 15 12:49:06.603241 systemd[1]: Started containerd.service - containerd container runtime.
Jan 15 12:49:06.614526 systemd[1]: issuegen.service: Deactivated successfully.
Jan 15 12:49:06.615073 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 15 12:49:06.629218 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 15 12:49:06.648325 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 15 12:49:06.680384 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 15 12:49:06.688596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 12:49:06.696364 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 12:49:06.704469 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 15 12:49:06.718418 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 15 12:49:06.728152 systemd[1]: Reached target getty.target - Login Prompts.
Jan 15 12:49:06.734316 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 15 12:49:06.742322 systemd[1]: Startup finished in 689ms (kernel) + 12.626s (initrd) + 10.566s (userspace) = 23.882s.
Jan 15 12:49:07.016837 login[1829]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:49:07.018816 login[1830]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:49:07.029733 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 15 12:49:07.037655 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 15 12:49:07.043077 systemd-logind[1693]: New session 1 of user core.
Jan 15 12:49:07.050596 systemd-logind[1693]: New session 2 of user core.
Jan 15 12:49:07.057914 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 15 12:49:07.067470 systemd[1]: Starting user@500.service - User Manager for UID 500...
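containerd reports serving on /run/containerd/containerd.sock just above. The API itself is gRPC (`ctr version` or `crictl info` are the usual probes), but the cheapest liveness check is simply connecting to the UNIX socket; a sketch, assuming root privileges on the same host:

```python
# If this connect succeeds, containerd is up and accepting clients;
# a real health check would speak gRPC over this socket instead.
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.settimeout(2)
s.connect("/run/containerd/containerd.sock")
print("containerd socket is accepting connections")
s.close()
```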
Jan 15 12:49:07.091567 (systemd)[1841]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 15 12:49:07.238411 kubelet[1827]: E0115 12:49:07.238289 1827 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 12:49:07.241658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 12:49:07.241910 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 12:49:07.484591 systemd[1841]: Queued start job for default target default.target.
Jan 15 12:49:07.493649 systemd[1841]: Created slice app.slice - User Application Slice.
Jan 15 12:49:07.493677 systemd[1841]: Reached target paths.target - Paths.
Jan 15 12:49:07.493690 systemd[1841]: Reached target timers.target - Timers.
Jan 15 12:49:07.495095 systemd[1841]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 15 12:49:07.508274 systemd[1841]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 15 12:49:07.508411 systemd[1841]: Reached target sockets.target - Sockets.
Jan 15 12:49:07.508426 systemd[1841]: Reached target basic.target - Basic System.
Jan 15 12:49:07.508474 systemd[1841]: Reached target default.target - Main User Target.
Jan 15 12:49:07.508505 systemd[1841]: Startup finished in 408ms.
Jan 15 12:49:07.508708 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 15 12:49:07.516238 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 15 12:49:07.516965 systemd[1]: Started session-2.scope - Session 2 of User core.
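The kubelet exit above is expected on a freshly provisioned node: the unit is enabled, but /var/lib/kubelet/config.yaml does not exist until a bootstrapper (typically kubeadm or the test harness) writes it, so every start fails the same way and systemd keeps rescheduling it, as the later restart counters show. A minimal reproduction of the failing check:

```python
# kubelet aborts during startup when its config file is missing; this
# mirrors that first filesystem check and the error it reports.
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")
if not cfg.is_file():
    raise SystemExit(f"failed to load Kubelet config file {cfg}: no such file or directory")
print("config present:", cfg)
```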
Jan 15 12:49:08.386159 waagent[1818]: 2025-01-15T12:49:08.386055Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 15 12:49:08.392383 waagent[1818]: 2025-01-15T12:49:08.392305Z INFO Daemon Daemon OS: flatcar 4081.3.0
Jan 15 12:49:08.397346 waagent[1818]: 2025-01-15T12:49:08.397285Z INFO Daemon Daemon Python: 3.11.9
Jan 15 12:49:08.402230 waagent[1818]: 2025-01-15T12:49:08.401916Z INFO Daemon Daemon Run daemon
Jan 15 12:49:08.406562 waagent[1818]: 2025-01-15T12:49:08.406505Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0'
Jan 15 12:49:08.415582 waagent[1818]: 2025-01-15T12:49:08.415501Z INFO Daemon Daemon Using waagent for provisioning
Jan 15 12:49:08.421144 waagent[1818]: 2025-01-15T12:49:08.421087Z INFO Daemon Daemon Activate resource disk
Jan 15 12:49:08.426074 waagent[1818]: 2025-01-15T12:49:08.425964Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 15 12:49:08.437652 waagent[1818]: 2025-01-15T12:49:08.437585Z INFO Daemon Daemon Found device: None
Jan 15 12:49:08.442393 waagent[1818]: 2025-01-15T12:49:08.442338Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 15 12:49:08.451129 waagent[1818]: 2025-01-15T12:49:08.451068Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 15 12:49:08.464320 waagent[1818]: 2025-01-15T12:49:08.464260Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 15 12:49:08.471368 waagent[1818]: 2025-01-15T12:49:08.471310Z INFO Daemon Daemon Running default provisioning handler
Jan 15 12:49:08.483418 waagent[1818]: 2025-01-15T12:49:08.483339Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 15 12:49:08.498210 waagent[1818]: 2025-01-15T12:49:08.498140Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 15 12:49:08.509357 waagent[1818]: 2025-01-15T12:49:08.509291Z INFO Daemon Daemon cloud-init is enabled: False
Jan 15 12:49:08.514791 waagent[1818]: 2025-01-15T12:49:08.514735Z INFO Daemon Daemon Copying ovf-env.xml
Jan 15 12:49:08.702085 waagent[1818]: 2025-01-15T12:49:08.699712Z INFO Daemon Daemon Successfully mounted dvd
Jan 15 12:49:08.714897 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 15 12:49:08.717326 waagent[1818]: 2025-01-15T12:49:08.717255Z INFO Daemon Daemon Detect protocol endpoint
Jan 15 12:49:08.723194 waagent[1818]: 2025-01-15T12:49:08.723123Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 15 12:49:08.729247 waagent[1818]: 2025-01-15T12:49:08.729189Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 15 12:49:08.735995 waagent[1818]: 2025-01-15T12:49:08.735937Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 15 12:49:08.741862 waagent[1818]: 2025-01-15T12:49:08.741805Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 15 12:49:08.747262 waagent[1818]: 2025-01-15T12:49:08.747210Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 15 12:49:08.798771 waagent[1818]: 2025-01-15T12:49:08.798722Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 15 12:49:08.805759 waagent[1818]: 2025-01-15T12:49:08.805728Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 15 12:49:08.811220 waagent[1818]: 2025-01-15T12:49:08.811166Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 15 12:49:09.128208 waagent[1818]: 2025-01-15T12:49:09.128057Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 15 12:49:09.134849 waagent[1818]: 2025-01-15T12:49:09.134780Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 15 12:49:09.144136 waagent[1818]: 2025-01-15T12:49:09.144080Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 15 12:49:09.167812 waagent[1818]: 2025-01-15T12:49:09.167761Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Jan 15 12:49:09.173777 waagent[1818]: 2025-01-15T12:49:09.173728Z INFO Daemon
Jan 15 12:49:09.176879 waagent[1818]: 2025-01-15T12:49:09.176835Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 20fceb7c-d971-422f-b546-034e1c2137c6 eTag: 17940263971345564631 source: Fabric]
Jan 15 12:49:09.188867 waagent[1818]: 2025-01-15T12:49:09.188817Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 15 12:49:09.196893 waagent[1818]: 2025-01-15T12:49:09.196846Z INFO Daemon
Jan 15 12:49:09.199770 waagent[1818]: 2025-01-15T12:49:09.199722Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 15 12:49:09.211572 waagent[1818]: 2025-01-15T12:49:09.211535Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 15 12:49:09.310704 waagent[1818]: 2025-01-15T12:49:09.310608Z INFO Daemon Downloaded certificate {'thumbprint': 'DE406D3059EE853ECF82699BEEB30DB224ED5794', 'hasPrivateKey': False}
Jan 15 12:49:09.321235 waagent[1818]: 2025-01-15T12:49:09.321181Z INFO Daemon Downloaded certificate {'thumbprint': '49FC92B084EAF845A6BAAF5FB87BB1C67ED62E72', 'hasPrivateKey': True}
Jan 15 12:49:09.331455 waagent[1818]: 2025-01-15T12:49:09.331401Z INFO Daemon Fetch goal state completed
Jan 15 12:49:09.343519 waagent[1818]: 2025-01-15T12:49:09.343468Z INFO Daemon Daemon Starting provisioning
Jan 15 12:49:09.349156 waagent[1818]: 2025-01-15T12:49:09.349093Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 15 12:49:09.354359 waagent[1818]: 2025-01-15T12:49:09.354303Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-b8bd16053a]
Jan 15 12:49:09.379749 waagent[1818]: 2025-01-15T12:49:09.379667Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-b8bd16053a]
Jan 15 12:49:09.387341 waagent[1818]: 2025-01-15T12:49:09.387267Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 15 12:49:09.394214 waagent[1818]: 2025-01-15T12:49:09.394152Z INFO Daemon Daemon Primary interface is [eth0]
Jan 15 12:49:09.439205 systemd-networkd[1609]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:49:09.439214 systemd-networkd[1609]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
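The 40-hex-digit thumbprints in the goal-state certificate lines follow the usual Azure convention of a SHA-1 digest over the DER-encoded certificate. A sketch of that computation, with the PEM path purely illustrative (not a file from this system):

```python
# Compute an Azure-style certificate thumbprint: SHA-1 over the DER bytes.
# "/tmp/example-cert.pem" is a placeholder for any PEM certificate.
import hashlib
import ssl

pem = open("/tmp/example-cert.pem").read()
der = ssl.PEM_cert_to_DER_cert(pem)
print(hashlib.sha1(der).hexdigest().upper())
```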
Jan 15 12:49:09.439241 systemd-networkd[1609]: eth0: DHCP lease lost
Jan 15 12:49:09.440416 waagent[1818]: 2025-01-15T12:49:09.440326Z INFO Daemon Daemon Create user account if not exists
Jan 15 12:49:09.445951 systemd-networkd[1609]: eth0: DHCPv6 lease lost
Jan 15 12:49:09.446718 waagent[1818]: 2025-01-15T12:49:09.446643Z INFO Daemon Daemon User core already exists, skip useradd
Jan 15 12:49:09.452824 waagent[1818]: 2025-01-15T12:49:09.452758Z INFO Daemon Daemon Configure sudoer
Jan 15 12:49:09.457825 waagent[1818]: 2025-01-15T12:49:09.457763Z INFO Daemon Daemon Configure sshd
Jan 15 12:49:09.463552 waagent[1818]: 2025-01-15T12:49:09.463480Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 15 12:49:09.478564 waagent[1818]: 2025-01-15T12:49:09.478488Z INFO Daemon Daemon Deploy ssh public key.
Jan 15 12:49:09.484048 systemd-networkd[1609]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 15 12:49:10.634857 waagent[1818]: 2025-01-15T12:49:10.629754Z INFO Daemon Daemon Provisioning complete
Jan 15 12:49:10.650829 waagent[1818]: 2025-01-15T12:49:10.650776Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 15 12:49:10.658285 waagent[1818]: 2025-01-15T12:49:10.658205Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 15 12:49:10.669507 waagent[1818]: 2025-01-15T12:49:10.669435Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 15 12:49:10.806890 waagent[1902]: 2025-01-15T12:49:10.806806Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 15 12:49:10.807416 waagent[1902]: 2025-01-15T12:49:10.807348Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0
Jan 15 12:49:10.807585 waagent[1902]: 2025-01-15T12:49:10.807548Z INFO ExtHandler ExtHandler Python: 3.11.9
Jan 15 12:49:10.927776 waagent[1902]: 2025-01-15T12:49:10.927643Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 15 12:49:10.928134 waagent[1902]: 2025-01-15T12:49:10.928089Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 15 12:49:10.928285 waagent[1902]: 2025-01-15T12:49:10.928249Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 15 12:49:10.937014 waagent[1902]: 2025-01-15T12:49:10.936921Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 15 12:49:10.944199 waagent[1902]: 2025-01-15T12:49:10.944141Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Jan 15 12:49:10.944904 waagent[1902]: 2025-01-15T12:49:10.944858Z INFO ExtHandler
Jan 15 12:49:10.945157 waagent[1902]: 2025-01-15T12:49:10.945118Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9b4038f1-20f9-4ad9-9569-cc2804509462 eTag: 17940263971345564631 source: Fabric]
Jan 15 12:49:10.945575 waagent[1902]: 2025-01-15T12:49:10.945535Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 15 12:49:10.946357 waagent[1902]: 2025-01-15T12:49:10.946312Z INFO ExtHandler
Jan 15 12:49:10.946533 waagent[1902]: 2025-01-15T12:49:10.946499Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 15 12:49:10.950675 waagent[1902]: 2025-01-15T12:49:10.950639Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 15 12:49:11.038684 waagent[1902]: 2025-01-15T12:49:11.038585Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DE406D3059EE853ECF82699BEEB30DB224ED5794', 'hasPrivateKey': False}
Jan 15 12:49:11.039200 waagent[1902]: 2025-01-15T12:49:11.039149Z INFO ExtHandler Downloaded certificate {'thumbprint': '49FC92B084EAF845A6BAAF5FB87BB1C67ED62E72', 'hasPrivateKey': True}
Jan 15 12:49:11.039625 waagent[1902]: 2025-01-15T12:49:11.039581Z INFO ExtHandler Fetch goal state completed
Jan 15 12:49:11.057469 waagent[1902]: 2025-01-15T12:49:11.057399Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1902
Jan 15 12:49:11.057630 waagent[1902]: 2025-01-15T12:49:11.057592Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 15 12:49:11.059428 waagent[1902]: 2025-01-15T12:49:11.059379Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk']
Jan 15 12:49:11.059835 waagent[1902]: 2025-01-15T12:49:11.059781Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 15 12:49:11.212774 waagent[1902]: 2025-01-15T12:49:11.212675Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 15 12:49:11.212913 waagent[1902]: 2025-01-15T12:49:11.212872Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 15 12:49:11.219490 waagent[1902]: 2025-01-15T12:49:11.219447Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 15 12:49:11.226461 systemd[1]: Reloading requested from client PID 1917 ('systemctl') (unit waagent.service)...
Jan 15 12:49:11.226474 systemd[1]: Reloading...
Jan 15 12:49:11.315061 zram_generator::config[1954]: No configuration found.
Jan 15 12:49:11.430908 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 15 12:49:11.521393 systemd[1]: Reloading finished in 294 ms.
Jan 15 12:49:11.553508 waagent[1902]: 2025-01-15T12:49:11.553137Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jan 15 12:49:11.561095 systemd[1]: Reloading requested from client PID 2005 ('systemctl') (unit waagent.service)...
Jan 15 12:49:11.561110 systemd[1]: Reloading...
Jan 15 12:49:11.634127 zram_generator::config[2039]: No configuration found.
Jan 15 12:49:11.750061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 15 12:49:11.840383 systemd[1]: Reloading finished in 278 ms.
Jan 15 12:49:11.868023 waagent[1902]: 2025-01-15T12:49:11.867219Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jan 15 12:49:11.868023 waagent[1902]: 2025-01-15T12:49:11.867395Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jan 15 12:49:12.217251 waagent[1902]: 2025-01-15T12:49:12.217114Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jan 15 12:49:12.217951 waagent[1902]: 2025-01-15T12:49:12.217904Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jan 15 12:49:12.218984 waagent[1902]: 2025-01-15T12:49:12.218928Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 15 12:49:12.219481 waagent[1902]: 2025-01-15T12:49:12.219432Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 15 12:49:12.219887 waagent[1902]: 2025-01-15T12:49:12.219828Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jan 15 12:49:12.220120 waagent[1902]: 2025-01-15T12:49:12.220078Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 15 12:49:12.220205 waagent[1902]: 2025-01-15T12:49:12.220172Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 15 12:49:12.220367 waagent[1902]: 2025-01-15T12:49:12.220326Z INFO EnvHandler ExtHandler Configure routes
Jan 15 12:49:12.220427 waagent[1902]: 2025-01-15T12:49:12.220398Z INFO EnvHandler ExtHandler Gateway:None
Jan 15 12:49:12.220476 waagent[1902]: 2025-01-15T12:49:12.220450Z INFO EnvHandler ExtHandler Routes:None
Jan 15 12:49:12.220831 waagent[1902]: 2025-01-15T12:49:12.220782Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jan 15 12:49:12.221207 waagent[1902]: 2025-01-15T12:49:12.220953Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 15 12:49:12.221350 waagent[1902]: 2025-01-15T12:49:12.221294Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 15 12:49:12.221570 waagent[1902]: 2025-01-15T12:49:12.221526Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jan 15 12:49:12.221755 waagent[1902]: 2025-01-15T12:49:12.221713Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 15 12:49:12.221755 waagent[1902]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jan 15 12:49:12.221755 waagent[1902]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jan 15 12:49:12.221755 waagent[1902]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jan 15 12:49:12.221755 waagent[1902]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jan 15 12:49:12.221755 waagent[1902]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 15 12:49:12.221755 waagent[1902]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 15 12:49:12.222155 waagent[1902]: 2025-01-15T12:49:12.222100Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 15 12:49:12.222698 waagent[1902]: 2025-01-15T12:49:12.222248Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
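The Destination/Gateway/Mask columns in the /proc/net/route dump above are little-endian hexadecimal IPv4 addresses. Decoding a few of them confirms the default gateway plus the wireserver and IMDS host routes:

```python
# /proc/net/route stores addresses as little-endian 32-bit hex values.
import socket
import struct

def hex_to_ip(h: str) -> str:
    return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

print(hex_to_ip("0114C80A"))  # -> 10.200.20.1 (default gateway)
print(hex_to_ip("10813FA8"))  # -> 168.63.129.16 (wireserver host route)
print(hex_to_ip("FEA9FEA9"))  # -> 169.254.169.254 (IMDS host route)
```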
Jan 15 12:49:12.223104 waagent[1902]: 2025-01-15T12:49:12.223066Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jan 15 12:49:12.231686 waagent[1902]: 2025-01-15T12:49:12.231627Z INFO ExtHandler ExtHandler
Jan 15 12:49:12.231801 waagent[1902]: 2025-01-15T12:49:12.231744Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: fd33fa15-8dd4-40c8-bd8f-8e5fce4ce6b3 correlation 031cf006-8f12-4a3f-8d4f-fd2c39f9eb06 created: 2025-01-15T12:47:54.092491Z]
Jan 15 12:49:12.232236 waagent[1902]: 2025-01-15T12:49:12.232183Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 15 12:49:12.232827 waagent[1902]: 2025-01-15T12:49:12.232786Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jan 15 12:49:12.265879 waagent[1902]: 2025-01-15T12:49:12.265828Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F069DC3A-F8A5-4A94-86C5-097A4D3A9B3C;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jan 15 12:49:12.333290 waagent[1902]: 2025-01-15T12:49:12.333198Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 15 12:49:12.333290 waagent[1902]: Executing ['ip', '-a', '-o', 'link']:
Jan 15 12:49:12.333290 waagent[1902]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 15 12:49:12.333290 waagent[1902]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:af:3c brd ff:ff:ff:ff:ff:ff
Jan 15 12:49:12.333290 waagent[1902]: 3: enP6886s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:af:3c brd ff:ff:ff:ff:ff:ff\ altname enP6886p0s2
Jan 15 12:49:12.333290 waagent[1902]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 15 12:49:12.333290 waagent[1902]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 15 12:49:12.333290 waagent[1902]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 15 12:49:12.333290 waagent[1902]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 15 12:49:12.333290 waagent[1902]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 15 12:49:12.333290 waagent[1902]: 2: eth0 inet6 fe80::20d:3aff:fec5:af3c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 15 12:49:12.333290 waagent[1902]: 3: enP6886s1 inet6 fe80::20d:3aff:fec5:af3c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 15 12:49:12.375687 waagent[1902]: 2025-01-15T12:49:12.375331Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules.
Jan 15 12:49:12.375687 waagent[1902]: Current Firewall rules:
Jan 15 12:49:12.375687 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 15 12:49:12.375687 waagent[1902]: pkts bytes target prot opt in out source destination
Jan 15 12:49:12.375687 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 15 12:49:12.375687 waagent[1902]: pkts bytes target prot opt in out source destination
Jan 15 12:49:12.375687 waagent[1902]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 15 12:49:12.375687 waagent[1902]: pkts bytes target prot opt in out source destination
Jan 15 12:49:12.375687 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 15 12:49:12.375687 waagent[1902]: 4 415 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 15 12:49:12.375687 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 15 12:49:12.380669 waagent[1902]: 2025-01-15T12:49:12.380563Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 15 12:49:12.380669 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 15 12:49:12.380669 waagent[1902]: pkts bytes target prot opt in out source destination
Jan 15 12:49:12.380669 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 15 12:49:12.380669 waagent[1902]: pkts bytes target prot opt in out source destination
Jan 15 12:49:12.380669 waagent[1902]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 15 12:49:12.380669 waagent[1902]: pkts bytes target prot opt in out source destination
Jan 15 12:49:12.380669 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 15 12:49:12.380669 waagent[1902]: 11 926 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 15 12:49:12.380669 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 15 12:49:12.380931 waagent[1902]: 2025-01-15T12:49:12.380896Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 15 12:49:17.492812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 15 12:49:17.498259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 12:49:17.614050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 12:49:17.618404 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 12:49:17.662269 kubelet[2134]: E0115 12:49:17.662185 2134 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 12:49:17.665557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 12:49:17.665686 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 12:49:27.916346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 15 12:49:27.931272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 12:49:28.030894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
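The three OUTPUT rules waagent installs fence off the wireserver (168.63.129.16): DNS on TCP port 53 and connections owned by UID 0 are accepted, and any other new connection to that address is dropped, so unprivileged workloads cannot reach the host endpoint. A sketch that checks the DROP rule is in place (needs root; parses the plain-text iptables-save listing):

```python
# Look for the wireserver DROP rule among the saved iptables rules.
import subprocess

rules = subprocess.run(["iptables-save"], capture_output=True, text=True).stdout
present = any("168.63.129.16" in ln and "DROP" in ln for ln in rules.splitlines())
print("wireserver DROP rule present:", present)
```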
Jan 15 12:49:28.035913 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 12:49:28.085426 kubelet[2150]: E0115 12:49:28.085334 2150 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 12:49:28.088603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 12:49:28.088752 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 12:49:29.173742 chronyd[1677]: Selected source PHC0
Jan 15 12:49:38.339262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 15 12:49:38.358321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 12:49:38.470249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 12:49:38.470466 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 12:49:38.521827 kubelet[2167]: E0115 12:49:38.521754 2167 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 12:49:38.524448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 12:49:38.524574 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 12:49:48.666394 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 15 12:49:48.675267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 12:49:48.962727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 12:49:48.977364 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 12:49:49.028618 kubelet[2183]: E0115 12:49:49.028551 2183 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 12:49:49.030910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 12:49:49.031077 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 12:49:49.951679 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jan 15 12:49:50.893778 update_engine[1702]: I20250115 12:49:50.893172 1702 update_attempter.cc:509] Updating boot flags...
Jan 15 12:49:50.968326 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2204)
Jan 15 12:49:51.071055 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2205)
Jan 15 12:49:59.166244 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 15 12:49:59.176283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
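The kubelet restart attempts land roughly ten seconds apart (counters 1 through 5 above), consistent with a systemd Restart= loop using a ~10 s RestartSec. A quick check from the scheduled-restart timestamps recorded in this log:

```python
# Gaps between the "Scheduled restart job" timestamps above.
from datetime import datetime

attempts = ["12:49:17", "12:49:27", "12:49:38", "12:49:48", "12:49:59"]
times = [datetime.strptime(t, "%H:%M:%S") for t in attempts]
print([(b - a).total_seconds() for a, b in zip(times, times[1:])])
# -> [10.0, 11.0, 10.0, 11.0]
```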
Jan 15 12:49:59.559978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 12:49:59.574466 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 12:49:59.621780 kubelet[2266]: E0115 12:49:59.621715 2266 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 12:49:59.624716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 12:49:59.624875 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 12:50:01.371618 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 15 12:50:01.376284 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.16.10:44724.service - OpenSSH per-connection server daemon (10.200.16.10:44724).
Jan 15 12:50:01.929617 sshd[2275]: Accepted publickey for core from 10.200.16.10 port 44724 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM
Jan 15 12:50:01.931657 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:50:01.936073 systemd-logind[1693]: New session 3 of user core.
Jan 15 12:50:01.953178 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 15 12:50:02.354417 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.16.10:44726.service - OpenSSH per-connection server daemon (10.200.16.10:44726).
Jan 15 12:50:02.799545 sshd[2280]: Accepted publickey for core from 10.200.16.10 port 44726 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM
Jan 15 12:50:02.801151 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:50:02.805719 systemd-logind[1693]: New session 4 of user core.
Jan 15 12:50:02.817211 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 15 12:50:03.145130 sshd[2280]: pam_unix(sshd:session): session closed for user core
Jan 15 12:50:03.149318 systemd[1]: sshd@1-10.200.20.38:22-10.200.16.10:44726.service: Deactivated successfully.
Jan 15 12:50:03.151801 systemd[1]: session-4.scope: Deactivated successfully.
Jan 15 12:50:03.152555 systemd-logind[1693]: Session 4 logged out. Waiting for processes to exit.
Jan 15 12:50:03.153766 systemd-logind[1693]: Removed session 4.
Jan 15 12:50:03.227796 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.16.10:44738.service - OpenSSH per-connection server daemon (10.200.16.10:44738).
Jan 15 12:50:03.677735 sshd[2287]: Accepted publickey for core from 10.200.16.10 port 44738 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM
Jan 15 12:50:03.679354 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 12:50:03.683606 systemd-logind[1693]: New session 5 of user core.
Jan 15 12:50:03.695222 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 15 12:50:04.019155 sshd[2287]: pam_unix(sshd:session): session closed for user core
Jan 15 12:50:04.023235 systemd[1]: sshd@2-10.200.20.38:22-10.200.16.10:44738.service: Deactivated successfully.
Jan 15 12:50:04.024761 systemd[1]: session-5.scope: Deactivated successfully.
Jan 15 12:50:04.026602 systemd-logind[1693]: Session 5 logged out. Waiting for processes to exit.
Jan 15 12:50:04.027610 systemd-logind[1693]: Removed session 5.
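The SHA256:... value sshd logs for each accepted key is the OpenSSH fingerprint: an unpadded base64 encoding of the SHA-256 digest of the raw key blob. A sketch that derives it from an authorized_keys-style line:

```python
# OpenSSH fingerprint: "SHA256:" + base64(sha256(key blob)), padding stripped.
import base64
import hashlib

def openssh_fingerprint(authorized_keys_line: str) -> str:
    blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# usage: openssh_fingerprint("ssh-rsa AAAAB3... comment")
```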
Jan 15 12:50:04.107337 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.16.10:44740.service - OpenSSH per-connection server daemon (10.200.16.10:44740). Jan 15 12:50:04.583478 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 44740 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:04.585405 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:04.590121 systemd-logind[1693]: New session 6 of user core. Jan 15 12:50:04.598195 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 12:50:04.945741 sshd[2294]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:04.950310 systemd[1]: sshd@3-10.200.20.38:22-10.200.16.10:44740.service: Deactivated successfully. Jan 15 12:50:04.951933 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 12:50:04.953779 systemd-logind[1693]: Session 6 logged out. Waiting for processes to exit. Jan 15 12:50:04.954868 systemd-logind[1693]: Removed session 6. Jan 15 12:50:05.030457 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.16.10:44752.service - OpenSSH per-connection server daemon (10.200.16.10:44752). Jan 15 12:50:05.474850 sshd[2301]: Accepted publickey for core from 10.200.16.10 port 44752 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:05.476308 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:05.480490 systemd-logind[1693]: New session 7 of user core. Jan 15 12:50:05.487164 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 12:50:05.840040 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 12:50:05.840335 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:50:05.857529 sudo[2304]: pam_unix(sudo:session): session closed for user root Jan 15 12:50:05.945667 sshd[2301]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:05.949774 systemd[1]: sshd@4-10.200.20.38:22-10.200.16.10:44752.service: Deactivated successfully. Jan 15 12:50:05.952484 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 12:50:05.953410 systemd-logind[1693]: Session 7 logged out. Waiting for processes to exit. Jan 15 12:50:05.954674 systemd-logind[1693]: Removed session 7. Jan 15 12:50:06.032576 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.16.10:35716.service - OpenSSH per-connection server daemon (10.200.16.10:35716). Jan 15 12:50:06.512221 sshd[2309]: Accepted publickey for core from 10.200.16.10 port 35716 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:06.514100 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:06.517941 systemd-logind[1693]: New session 8 of user core. Jan 15 12:50:06.526264 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 15 12:50:06.784814 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 12:50:06.785356 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:50:06.788830 sudo[2313]: pam_unix(sudo:session): session closed for user root Jan 15 12:50:06.793955 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 15 12:50:06.794360 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:50:06.811281 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 15 12:50:06.813900 auditctl[2316]: No rules Jan 15 12:50:06.814285 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 12:50:06.814458 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 15 12:50:06.817341 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 15 12:50:06.843043 augenrules[2334]: No rules Jan 15 12:50:06.844109 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 15 12:50:06.845560 sudo[2312]: pam_unix(sudo:session): session closed for user root Jan 15 12:50:06.917070 sshd[2309]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:06.920885 systemd[1]: sshd@5-10.200.20.38:22-10.200.16.10:35716.service: Deactivated successfully. Jan 15 12:50:06.922718 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 12:50:06.924298 systemd-logind[1693]: Session 8 logged out. Waiting for processes to exit. Jan 15 12:50:06.925715 systemd-logind[1693]: Removed session 8. Jan 15 12:50:07.002666 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.16.10:35732.service - OpenSSH per-connection server daemon (10.200.16.10:35732). Jan 15 12:50:07.462872 sshd[2342]: Accepted publickey for core from 10.200.16.10 port 35732 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:07.464434 sshd[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:07.468614 systemd-logind[1693]: New session 9 of user core. Jan 15 12:50:07.476167 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 15 12:50:07.717525 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 12:50:07.717799 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:50:08.580430 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 15 12:50:08.580490 (dockerd)[2362]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 12:50:09.051592 dockerd[2362]: time="2025-01-15T12:50:09.051020752Z" level=info msg="Starting up" Jan 15 12:50:09.469338 dockerd[2362]: time="2025-01-15T12:50:09.469289877Z" level=info msg="Loading containers: start." Jan 15 12:50:09.628024 kernel: Initializing XFRM netlink socket Jan 15 12:50:09.651659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 15 12:50:09.659425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:09.792092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 15 12:50:09.801308 (kubelet)[2438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:50:09.850883 kubelet[2438]: E0115 12:50:09.850777 2438 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:50:09.853662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:50:09.853824 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:50:09.900849 systemd-networkd[1609]: docker0: Link UP Jan 15 12:50:10.066352 dockerd[2362]: time="2025-01-15T12:50:10.066244616Z" level=info msg="Loading containers: done." Jan 15 12:50:10.097980 dockerd[2362]: time="2025-01-15T12:50:10.097596848Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 12:50:10.097980 dockerd[2362]: time="2025-01-15T12:50:10.097701808Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 15 12:50:10.097980 dockerd[2362]: time="2025-01-15T12:50:10.097811649Z" level=info msg="Daemon has completed initialization" Jan 15 12:50:10.165611 dockerd[2362]: time="2025-01-15T12:50:10.165208124Z" level=info msg="API listen on /run/docker.sock" Jan 15 12:50:10.166123 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 12:50:11.599034 containerd[1744]: time="2025-01-15T12:50:11.598735315Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 15 12:50:12.550603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623906724.mount: Deactivated successfully. 
Jan 15 12:50:13.926368 containerd[1744]: time="2025-01-15T12:50:13.926306571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:13.934311 containerd[1744]: time="2025-01-15T12:50:13.934244549Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Jan 15 12:50:13.938173 containerd[1744]: time="2025-01-15T12:50:13.938100438Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:13.946181 containerd[1744]: time="2025-01-15T12:50:13.946105257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:13.947042 containerd[1744]: time="2025-01-15T12:50:13.946828458Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.348050103s" Jan 15 12:50:13.947042 containerd[1744]: time="2025-01-15T12:50:13.946863538Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 15 12:50:13.975057 containerd[1744]: time="2025-01-15T12:50:13.974830923Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 15 12:50:15.505192 containerd[1744]: time="2025-01-15T12:50:15.505113857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:15.509048 containerd[1744]: time="2025-01-15T12:50:15.508953986Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Jan 15 12:50:15.514357 containerd[1744]: time="2025-01-15T12:50:15.514289958Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:15.520158 containerd[1744]: time="2025-01-15T12:50:15.520080532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:15.521204 containerd[1744]: time="2025-01-15T12:50:15.521068254Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.546125291s" Jan 15 12:50:15.521204 containerd[1744]: time="2025-01-15T12:50:15.521111454Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\""
Jan 15 12:50:15.543214 containerd[1744]: time="2025-01-15T12:50:15.543139545Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 15 12:50:16.590564 containerd[1744]: time="2025-01-15T12:50:16.590450004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:16.593774 containerd[1744]: time="2025-01-15T12:50:16.593266010Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Jan 15 12:50:16.599864 containerd[1744]: time="2025-01-15T12:50:16.599799425Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:16.607212 containerd[1744]: time="2025-01-15T12:50:16.607144282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:16.608098 containerd[1744]: time="2025-01-15T12:50:16.607718404Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.064537139s" Jan 15 12:50:16.608098 containerd[1744]: time="2025-01-15T12:50:16.607755884Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 15 12:50:16.631379 containerd[1744]: time="2025-01-15T12:50:16.631334818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 15 12:50:18.095363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862916065.mount: Deactivated successfully.
Jan 15 12:50:18.462144 containerd[1744]: time="2025-01-15T12:50:18.461741745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:18.465698 containerd[1744]: time="2025-01-15T12:50:18.465616393Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Jan 15 12:50:18.468946 containerd[1744]: time="2025-01-15T12:50:18.468858720Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:18.474589 containerd[1744]: time="2025-01-15T12:50:18.474522972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:18.475640 containerd[1744]: time="2025-01-15T12:50:18.475113933Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.843735195s" Jan 15 12:50:18.475640 containerd[1744]: time="2025-01-15T12:50:18.475151653Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 15 12:50:18.498169 containerd[1744]: time="2025-01-15T12:50:18.498124422Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 15 12:50:19.254205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061546699.mount: Deactivated successfully. Jan 15 12:50:19.916130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 15 12:50:19.924310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:20.037429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:20.042481 (kubelet)[2650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:50:20.092905 kubelet[2650]: E0115 12:50:20.092820 2650 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:50:20.095108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:50:20.095233 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 15 12:50:21.305046 containerd[1744]: time="2025-01-15T12:50:21.304971953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:21.310106 containerd[1744]: time="2025-01-15T12:50:21.309865164Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 15 12:50:21.315899 containerd[1744]: time="2025-01-15T12:50:21.315840256Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:21.324013 containerd[1744]: time="2025-01-15T12:50:21.323927434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:21.325435 containerd[1744]: time="2025-01-15T12:50:21.325235636Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.827024534s" Jan 15 12:50:21.325435 containerd[1744]: time="2025-01-15T12:50:21.325285836Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 15 12:50:21.347404 containerd[1744]: time="2025-01-15T12:50:21.347137523Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 15 12:50:21.936055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849416171.mount: Deactivated successfully. 
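Each pull in the sequence above ends with a containerd "Pulled image ... in <duration>" record carrying both the reported size and the elapsed time, so pull throughput can be read straight out of the journal. A small sketch under the same assumption as before (journal text in a hypothetical journal.log; durations only use the ms and s suffixes seen in this log):

import re

# Matches containerd pull completions, e.g.
#   Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" ... size \"16482581\" in 2.827024534s
PULLED = re.compile(r'Pulled image \\"([^"\\]+)\\".*?size \\"(\d+)\\" in ([\d.]+)(ms|s)')

def pull_stats(text):
    """Yield (image, bytes, seconds) for every completed image pull."""
    for image, size, value, unit in PULLED.findall(text):
        seconds = float(value) / 1000 if unit == "ms" else float(value)
        yield image, int(size), seconds

if __name__ == "__main__":
    for image, size, seconds in pull_stats(open("journal.log").read()):
        print(f"{image}: {size / 1e6:.1f} MB in {seconds:.2f}s ({size / 1e6 / seconds:.1f} MB/s)")

The size field is whatever containerd reports for the pull, which appears to track transferred content rather than unpacked layers, so the MB/s figure is best read as registry throughput.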
Jan 15 12:50:21.964288 containerd[1744]: time="2025-01-15T12:50:21.964173716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:21.967361 containerd[1744]: time="2025-01-15T12:50:21.967159362Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 15 12:50:21.972890 containerd[1744]: time="2025-01-15T12:50:21.972826454Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:21.978368 containerd[1744]: time="2025-01-15T12:50:21.978296306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:21.979503 containerd[1744]: time="2025-01-15T12:50:21.979049627Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 631.874224ms" Jan 15 12:50:21.979503 containerd[1744]: time="2025-01-15T12:50:21.979085867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 15 12:50:22.001411 containerd[1744]: time="2025-01-15T12:50:22.001359595Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 15 12:50:22.703558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815310767.mount: Deactivated successfully. Jan 15 12:50:25.873098 containerd[1744]: time="2025-01-15T12:50:25.873037445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:25.876826 containerd[1744]: time="2025-01-15T12:50:25.876773654Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jan 15 12:50:25.882163 containerd[1744]: time="2025-01-15T12:50:25.881985626Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:25.887164 containerd[1744]: time="2025-01-15T12:50:25.887094797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:25.888310 containerd[1744]: time="2025-01-15T12:50:25.888229320Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.886824085s" Jan 15 12:50:25.888310 containerd[1744]: time="2025-01-15T12:50:25.888272000Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 15 12:50:30.166288 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
Jan 15 12:50:30.173251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:30.441205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:30.452576 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:50:30.503180 kubelet[2797]: E0115 12:50:30.503124 2797 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:50:30.506902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:50:30.507365 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:50:32.358121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:32.367334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:32.391276 systemd[1]: Reloading requested from client PID 2811 ('systemctl') (unit session-9.scope)... Jan 15 12:50:32.391292 systemd[1]: Reloading... Jan 15 12:50:32.521172 zram_generator::config[2851]: No configuration found. Jan 15 12:50:32.634140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:50:32.720769 systemd[1]: Reloading finished in 329 ms. Jan 15 12:50:32.769416 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 12:50:32.769500 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 12:50:32.769952 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:32.775379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:32.879295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:32.885365 (kubelet)[2918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 12:50:32.939873 kubelet[2918]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 12:50:32.939873 kubelet[2918]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 12:50:32.939873 kubelet[2918]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 15 12:50:32.940254 kubelet[2918]: I0115 12:50:32.940050 2918 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 12:50:33.833536 kubelet[2918]: I0115 12:50:33.833493 2918 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 15 12:50:33.833536 kubelet[2918]: I0115 12:50:33.833528 2918 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 12:50:33.833767 kubelet[2918]: I0115 12:50:33.833746 2918 server.go:919] "Client rotation is on, will bootstrap in background" Jan 15 12:50:33.850764 kubelet[2918]: E0115 12:50:33.850720 2918 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:33.851094 kubelet[2918]: I0115 12:50:33.850950 2918 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 12:50:33.860293 kubelet[2918]: I0115 12:50:33.860263 2918 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 15 12:50:33.861762 kubelet[2918]: I0115 12:50:33.861728 2918 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 12:50:33.862010 kubelet[2918]: I0115 12:50:33.861958 2918 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 15 12:50:33.862119 kubelet[2918]: I0115 12:50:33.862097 2918 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 12:50:33.862119 kubelet[2918]: I0115 12:50:33.862108 2918 container_manager_linux.go:301] "Creating device plugin manager" Jan 15 12:50:33.863632 kubelet[2918]: I0115 12:50:33.863603 2918 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:50:33.866151 kubelet[2918]: I0115 12:50:33.866116 2918 kubelet.go:396] "Attempting to sync node with API server"
Jan 15 12:50:33.866151 kubelet[2918]: I0115 12:50:33.866152 2918 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 12:50:33.867795 kubelet[2918]: I0115 12:50:33.866177 2918 kubelet.go:312] "Adding apiserver pod source" Jan 15 12:50:33.867795 kubelet[2918]: I0115 12:50:33.866193 2918 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 12:50:33.869686 kubelet[2918]: W0115 12:50:33.869333 2918 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:33.869686 kubelet[2918]: E0115 12:50:33.869385 2918 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:33.869686 kubelet[2918]: W0115 12:50:33.869629 2918 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b8bd16053a&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:33.869686 kubelet[2918]: E0115 12:50:33.869663 2918 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b8bd16053a&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:33.870390 kubelet[2918]: I0115 12:50:33.870371 2918 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 12:50:33.870750 kubelet[2918]: I0115 12:50:33.870729 2918 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 12:50:33.871663 kubelet[2918]: W0115 12:50:33.871641 2918 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 15 12:50:33.872404 kubelet[2918]: I0115 12:50:33.872382 2918 server.go:1256] "Started kubelet" Jan 15 12:50:33.874473 kubelet[2918]: I0115 12:50:33.874310 2918 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 12:50:33.877685 kubelet[2918]: E0115 12:50:33.877658 2918 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-b8bd16053a.181adeab5c32da18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-b8bd16053a,UID:ci-4081.3.0-a-b8bd16053a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-b8bd16053a,},FirstTimestamp:2025-01-15 12:50:33.872357912 +0000 UTC m=+0.983161057,LastTimestamp:2025-01-15 12:50:33.872357912 +0000 UTC m=+0.983161057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-b8bd16053a,}" Jan 15 12:50:33.880054 kubelet[2918]: I0115 12:50:33.879223 2918 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 12:50:33.880054 kubelet[2918]: I0115 12:50:33.879962 2918 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 15 12:50:33.880264 kubelet[2918]: I0115 12:50:33.880247 2918 server.go:461] "Adding debug handlers to kubelet server" Jan 15 12:50:33.881766 kubelet[2918]: I0115 12:50:33.881738 2918 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 12:50:33.883339 kubelet[2918]: I0115 12:50:33.883318 2918 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 12:50:33.885728 kubelet[2918]: W0115 12:50:33.885601 2918 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:33.885728 kubelet[2918]: E0115 12:50:33.885672 2918 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:33.885929 kubelet[2918]: I0115 12:50:33.885868 2918 factory.go:221] Registration of the systemd container factory successfully Jan 15 12:50:33.886568 kubelet[2918]: I0115 12:50:33.886499 2918 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 12:50:33.886858 kubelet[2918]: E0115 12:50:33.886795 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b8bd16053a?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms" Jan 15 12:50:33.887330 kubelet[2918]: I0115 12:50:33.882188 2918 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 15 12:50:33.889516 kubelet[2918]: I0115 12:50:33.889442 2918 factory.go:221] Registration of the containerd container factory successfully Jan 15 12:50:33.897424 kubelet[2918]: I0115 12:50:33.882266 2918 reconciler_new.go:29] "Reconciler: start to sync state" Jan 15 12:50:33.921241 kubelet[2918]: I0115 12:50:33.921218 2918 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 12:50:33.921611 kubelet[2918]: I0115 12:50:33.921399 2918 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 12:50:33.921611 kubelet[2918]: I0115 12:50:33.921422 2918 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:50:33.927117 kubelet[2918]: I0115 12:50:33.926947 2918 policy_none.go:49] "None policy: Start" Jan 15 12:50:33.929978 kubelet[2918]: I0115 12:50:33.929652 2918 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 12:50:33.929978 kubelet[2918]: I0115 12:50:33.929693 2918 state_mem.go:35] "Initializing new in-memory state store" Jan 15 12:50:33.932464 kubelet[2918]: I0115 12:50:33.932402 2918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 12:50:33.934204 kubelet[2918]: I0115 12:50:33.933769 2918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 12:50:33.934204 kubelet[2918]: I0115 12:50:33.933797 2918 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 12:50:33.934204 kubelet[2918]: I0115 12:50:33.933959 2918 kubelet.go:2329] "Starting kubelet main sync loop" Jan 15 12:50:33.934204 kubelet[2918]: E0115 12:50:33.934062 2918 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 12:50:33.935098 kubelet[2918]: W0115 12:50:33.934669 2918 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:33.935098 kubelet[2918]: E0115 12:50:33.934705 2918 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:33.943008 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 15 12:50:33.954524 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 12:50:33.958770 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 15 12:50:33.977909 kubelet[2918]: I0115 12:50:33.977875 2918 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 12:50:33.978716 kubelet[2918]: I0115 12:50:33.978288 2918 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 12:50:33.980616 kubelet[2918]: E0115 12:50:33.980594 2918 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-b8bd16053a\" not found" Jan 15 12:50:33.983552 kubelet[2918]: I0115 12:50:33.983502 2918 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:33.983933 kubelet[2918]: E0115 12:50:33.983908 2918 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.035016 kubelet[2918]: I0115 12:50:34.034902 2918 topology_manager.go:215] "Topology Admit Handler" podUID="51bfaa8384784a1355021fe7118454fc" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.037206 kubelet[2918]: I0115 12:50:34.037020 2918 topology_manager.go:215] "Topology Admit Handler" podUID="bc46c5ec733e7cd990a74d688de65334" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.039443 kubelet[2918]: I0115 12:50:34.039347 2918 topology_manager.go:215] "Topology Admit Handler" podUID="9c540ecdf01be428eb7cd69fe455b7a8" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.046455 systemd[1]: Created slice kubepods-burstable-pod51bfaa8384784a1355021fe7118454fc.slice - libcontainer container kubepods-burstable-pod51bfaa8384784a1355021fe7118454fc.slice. Jan 15 12:50:34.062949 systemd[1]: Created slice kubepods-burstable-podbc46c5ec733e7cd990a74d688de65334.slice - libcontainer container kubepods-burstable-podbc46c5ec733e7cd990a74d688de65334.slice. Jan 15 12:50:34.086147 systemd[1]: Created slice kubepods-burstable-pod9c540ecdf01be428eb7cd69fe455b7a8.slice - libcontainer container kubepods-burstable-pod9c540ecdf01be428eb7cd69fe455b7a8.slice. 
Jan 15 12:50:34.087952 kubelet[2918]: E0115 12:50:34.087545 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b8bd16053a?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms" Jan 15 12:50:34.098349 kubelet[2918]: I0115 12:50:34.098251 2918 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51bfaa8384784a1355021fe7118454fc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-b8bd16053a\" (UID: \"51bfaa8384784a1355021fe7118454fc\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.098349 kubelet[2918]: I0115 12:50:34.098302 2918 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc46c5ec733e7cd990a74d688de65334-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b8bd16053a\" (UID: \"bc46c5ec733e7cd990a74d688de65334\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.098349 kubelet[2918]: I0115 12:50:34.098325 2918 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc46c5ec733e7cd990a74d688de65334-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b8bd16053a\" (UID: \"bc46c5ec733e7cd990a74d688de65334\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.098349 kubelet[2918]: I0115 12:50:34.098350 2918 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc46c5ec733e7cd990a74d688de65334-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-b8bd16053a\" (UID: \"bc46c5ec733e7cd990a74d688de65334\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.098559 kubelet[2918]: I0115 12:50:34.098406 2918 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc46c5ec733e7cd990a74d688de65334-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-b8bd16053a\" (UID: \"bc46c5ec733e7cd990a74d688de65334\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.098559 kubelet[2918]: I0115 12:50:34.098429 2918 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c540ecdf01be428eb7cd69fe455b7a8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-b8bd16053a\" (UID: \"9c540ecdf01be428eb7cd69fe455b7a8\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.098559 kubelet[2918]: I0115 12:50:34.098471 2918 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51bfaa8384784a1355021fe7118454fc-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b8bd16053a\" (UID: \"51bfaa8384784a1355021fe7118454fc\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:34.098559 kubelet[2918]: I0115 12:50:34.098491 2918 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51bfaa8384784a1355021fe7118454fc-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b8bd16053a\" (UID: \"51bfaa8384784a1355021fe7118454fc\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.098559 kubelet[2918]: I0115 12:50:34.098510 2918 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bc46c5ec733e7cd990a74d688de65334-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-b8bd16053a\" (UID: \"bc46c5ec733e7cd990a74d688de65334\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.186589 kubelet[2918]: I0115 12:50:34.186554 2918 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.187178 kubelet[2918]: E0115 12:50:34.187155 2918 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.361515 containerd[1744]: time="2025-01-15T12:50:34.360951719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-b8bd16053a,Uid:51bfaa8384784a1355021fe7118454fc,Namespace:kube-system,Attempt:0,}" Jan 15 12:50:34.383961 containerd[1744]: time="2025-01-15T12:50:34.383830846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-b8bd16053a,Uid:bc46c5ec733e7cd990a74d688de65334,Namespace:kube-system,Attempt:0,}" Jan 15 12:50:34.391182 containerd[1744]: time="2025-01-15T12:50:34.390849340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-b8bd16053a,Uid:9c540ecdf01be428eb7cd69fe455b7a8,Namespace:kube-system,Attempt:0,}" Jan 15 12:50:34.488641 kubelet[2918]: E0115 12:50:34.488561 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b8bd16053a?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" Jan 15 12:50:34.589417 kubelet[2918]: I0115 12:50:34.589353 2918 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.589911 kubelet[2918]: E0115 12:50:34.589844 2918 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:34.856444 kubelet[2918]: W0115 12:50:34.856357 2918 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:34.856444 kubelet[2918]: E0115 12:50:34.856423 2918 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:34.923817 kubelet[2918]: W0115 12:50:34.923733 2918 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused
Jan 15 12:50:34.923817 kubelet[2918]: E0115 12:50:34.923779 2918 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:34.938208 kubelet[2918]: W0115 12:50:34.938114 2918 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b8bd16053a&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:34.938208 kubelet[2918]: E0115 12:50:34.938175 2918 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b8bd16053a&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:34.967523 kubelet[2918]: W0115 12:50:34.967460 2918 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:34.967523 kubelet[2918]: E0115 12:50:34.967502 2918 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:35.096264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount926695495.mount: Deactivated successfully.
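The reflector warnings above are one condition seen from four informers (Service, Node, CSIDriver, RuntimeClass): kubelet's client-go caches cannot list or watch anything because nothing is listening on 10.200.20.38:6443 yet, and the static control-plane pods whose sandboxes are being created are what will eventually serve that port. A quick way to group the noise, under the same hypothetical journal.log assumption as the earlier sketches:

import re
from collections import Counter

# Each failed watch logs: Failed to watch *v1.<Kind>: ... connect: connection refused
REFUSED = re.compile(r"Failed to watch \*v1\.(\w+):.*?connection refused")

def refused_watches(text):
    """Count connection-refused watch failures per resource kind."""
    return Counter(REFUSED.findall(text))

print(refused_watches(open("journal.log").read()))
# For the window above this yields two failures apiece for Service, Node, CSIDriver and RuntimeClass.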
Jan 15 12:50:35.121097 containerd[1744]: time="2025-01-15T12:50:35.120123523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:50:35.133603 containerd[1744]: time="2025-01-15T12:50:35.133494030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 15 12:50:35.139091 containerd[1744]: time="2025-01-15T12:50:35.138368280Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:50:35.142118 containerd[1744]: time="2025-01-15T12:50:35.141631887Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:50:35.145023 containerd[1744]: time="2025-01-15T12:50:35.144676773Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:50:35.148611 containerd[1744]: time="2025-01-15T12:50:35.148352741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 12:50:35.151497 containerd[1744]: time="2025-01-15T12:50:35.151451627Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 12:50:35.156295 containerd[1744]: time="2025-01-15T12:50:35.156245477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:50:35.157537 containerd[1744]: time="2025-01-15T12:50:35.157044039Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 795.93288ms" Jan 15 12:50:35.161239 containerd[1744]: time="2025-01-15T12:50:35.161170767Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 770.243747ms" Jan 15 12:50:35.181500 containerd[1744]: time="2025-01-15T12:50:35.181380369Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 797.461883ms" Jan 15 12:50:35.289941 kubelet[2918]: E0115 12:50:35.289896 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b8bd16053a?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="1.6s" 
Jan 15 12:50:35.392242 kubelet[2918]: I0115 12:50:35.392136 2918 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:35.392673 kubelet[2918]: E0115 12:50:35.392485 2918 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.0-a-b8bd16053a" Jan 15 12:50:35.612752 kubelet[2918]: E0115 12:50:35.612704 2918 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-b8bd16053a.181adeab5c32da18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-b8bd16053a,UID:ci-4081.3.0-a-b8bd16053a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-b8bd16053a,},FirstTimestamp:2025-01-15 12:50:33.872357912 +0000 UTC m=+0.983161057,LastTimestamp:2025-01-15 12:50:33.872357912 +0000 UTC m=+0.983161057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-b8bd16053a,}" Jan 15 12:50:35.746427 containerd[1744]: time="2025-01-15T12:50:35.745819172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:50:35.746427 containerd[1744]: time="2025-01-15T12:50:35.745919492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:50:35.746427 containerd[1744]: time="2025-01-15T12:50:35.746153213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:35.746427 containerd[1744]: time="2025-01-15T12:50:35.746236013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:35.753810 containerd[1744]: time="2025-01-15T12:50:35.753566628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:50:35.753810 containerd[1744]: time="2025-01-15T12:50:35.753628948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:50:35.753810 containerd[1744]: time="2025-01-15T12:50:35.753647948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:35.754153 containerd[1744]: time="2025-01-15T12:50:35.753732548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:35.760974 containerd[1744]: time="2025-01-15T12:50:35.760418642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 15 12:50:35.760974 containerd[1744]: time="2025-01-15T12:50:35.760478282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:50:35.760974 containerd[1744]: time="2025-01-15T12:50:35.760494242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:35.760974 containerd[1744]: time="2025-01-15T12:50:35.760580203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:50:35.791218 systemd[1]: Started cri-containerd-81368e3c767a674ca84459a22c749aac713d550f860460b11a8830903e2948d2.scope - libcontainer container 81368e3c767a674ca84459a22c749aac713d550f860460b11a8830903e2948d2. Jan 15 12:50:35.796100 systemd[1]: Started cri-containerd-9454a629a2ebd54682c41d1449cba886b32f9a96020a2c9566304049183aa006.scope - libcontainer container 9454a629a2ebd54682c41d1449cba886b32f9a96020a2c9566304049183aa006. Jan 15 12:50:35.802126 systemd[1]: Started cri-containerd-1a1ae771ffd03021820cd6617d756a9d709a68eee5880918c95a97ac71205862.scope - libcontainer container 1a1ae771ffd03021820cd6617d756a9d709a68eee5880918c95a97ac71205862. Jan 15 12:50:35.852062 containerd[1744]: time="2025-01-15T12:50:35.851840951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-b8bd16053a,Uid:bc46c5ec733e7cd990a74d688de65334,Namespace:kube-system,Attempt:0,} returns sandbox id \"9454a629a2ebd54682c41d1449cba886b32f9a96020a2c9566304049183aa006\"" Jan 15 12:50:35.862654 containerd[1744]: time="2025-01-15T12:50:35.862134372Z" level=info msg="CreateContainer within sandbox \"9454a629a2ebd54682c41d1449cba886b32f9a96020a2c9566304049183aa006\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 12:50:35.871432 containerd[1744]: time="2025-01-15T12:50:35.870975190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-b8bd16053a,Uid:51bfaa8384784a1355021fe7118454fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"81368e3c767a674ca84459a22c749aac713d550f860460b11a8830903e2948d2\"" Jan 15 12:50:35.875263 containerd[1744]: time="2025-01-15T12:50:35.875117079Z" level=info msg="CreateContainer within sandbox \"81368e3c767a674ca84459a22c749aac713d550f860460b11a8830903e2948d2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 12:50:35.880862 containerd[1744]: time="2025-01-15T12:50:35.880770810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-b8bd16053a,Uid:9c540ecdf01be428eb7cd69fe455b7a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a1ae771ffd03021820cd6617d756a9d709a68eee5880918c95a97ac71205862\"" Jan 15 12:50:35.884412 containerd[1744]: time="2025-01-15T12:50:35.884269617Z" level=info msg="CreateContainer within sandbox \"1a1ae771ffd03021820cd6617d756a9d709a68eee5880918c95a97ac71205862\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 12:50:35.932743 containerd[1744]: time="2025-01-15T12:50:35.932570717Z" level=info msg="CreateContainer within sandbox \"9454a629a2ebd54682c41d1449cba886b32f9a96020a2c9566304049183aa006\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"99ee77d07647d6bf230443f4ce2704e2997fe27cf2abfe24ec4a5fecc8fe2d19\"" Jan 15 12:50:35.934402 containerd[1744]: time="2025-01-15T12:50:35.933599759Z" level=info msg="StartContainer for \"99ee77d07647d6bf230443f4ce2704e2997fe27cf2abfe24ec4a5fecc8fe2d19\""
Jan 15 12:50:35.954742 containerd[1744]: time="2025-01-15T12:50:35.954368442Z" level=info msg="CreateContainer within sandbox \"1a1ae771ffd03021820cd6617d756a9d709a68eee5880918c95a97ac71205862\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9cd831bdc9db6dd4f7960c9cda5cb3f6034319a82636164798fa970f7b10b73c\"" Jan 15 12:50:35.955438 containerd[1744]: time="2025-01-15T12:50:35.955233724Z" level=info msg="StartContainer for \"9cd831bdc9db6dd4f7960c9cda5cb3f6034319a82636164798fa970f7b10b73c\"" Jan 15 12:50:35.957654 containerd[1744]: time="2025-01-15T12:50:35.957591689Z" level=info msg="CreateContainer within sandbox \"81368e3c767a674ca84459a22c749aac713d550f860460b11a8830903e2948d2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c41cd47c64ba24b9fe6ae05f3d312f92fb7c88f549917590eaf9d1f123853e3c\"" Jan 15 12:50:35.958507 containerd[1744]: time="2025-01-15T12:50:35.958445210Z" level=info msg="StartContainer for \"c41cd47c64ba24b9fe6ae05f3d312f92fb7c88f549917590eaf9d1f123853e3c\"" Jan 15 12:50:35.968582 systemd[1]: Started cri-containerd-99ee77d07647d6bf230443f4ce2704e2997fe27cf2abfe24ec4a5fecc8fe2d19.scope - libcontainer container 99ee77d07647d6bf230443f4ce2704e2997fe27cf2abfe24ec4a5fecc8fe2d19. Jan 15 12:50:35.980650 kubelet[2918]: E0115 12:50:35.980358 2918 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.38:6443: connect: connection refused Jan 15 12:50:36.022361 systemd[1]: Started cri-containerd-9cd831bdc9db6dd4f7960c9cda5cb3f6034319a82636164798fa970f7b10b73c.scope - libcontainer container 9cd831bdc9db6dd4f7960c9cda5cb3f6034319a82636164798fa970f7b10b73c. Jan 15 12:50:36.023516 systemd[1]: Started cri-containerd-c41cd47c64ba24b9fe6ae05f3d312f92fb7c88f549917590eaf9d1f123853e3c.scope - libcontainer container c41cd47c64ba24b9fe6ae05f3d312f92fb7c88f549917590eaf9d1f123853e3c.
Jan 15 12:50:36.497750 containerd[1744]: time="2025-01-15T12:50:36.497663201Z" level=info msg="StartContainer for \"9cd831bdc9db6dd4f7960c9cda5cb3f6034319a82636164798fa970f7b10b73c\" returns successfully"
Jan 15 12:50:36.497881 containerd[1744]: time="2025-01-15T12:50:36.497826482Z" level=info msg="StartContainer for \"99ee77d07647d6bf230443f4ce2704e2997fe27cf2abfe24ec4a5fecc8fe2d19\" returns successfully"
Jan 15 12:50:36.497881 containerd[1744]: time="2025-01-15T12:50:36.497852482Z" level=info msg="StartContainer for \"c41cd47c64ba24b9fe6ae05f3d312f92fb7c88f549917590eaf9d1f123853e3c\" returns successfully"
Jan 15 12:50:36.994337 kubelet[2918]: I0115 12:50:36.994243 2918 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:38.285912 kubelet[2918]: I0115 12:50:38.285836 2918 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:38.288763 kubelet[2918]: E0115 12:50:38.288709 2918 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-b8bd16053a\" not found" node="ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:38.871158 kubelet[2918]: I0115 12:50:38.871085 2918 apiserver.go:52] "Watching apiserver"
Jan 15 12:50:38.887932 kubelet[2918]: I0115 12:50:38.887864 2918 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 15 12:50:38.970860 kubelet[2918]: E0115 12:50:38.970798 2918 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-b8bd16053a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:38.971432 kubelet[2918]: E0115 12:50:38.971372 2918 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-b8bd16053a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:41.066141 systemd[1]: Reloading requested from client PID 3190 ('systemctl') (unit session-9.scope)...
Jan 15 12:50:41.066160 systemd[1]: Reloading...
Jan 15 12:50:41.199111 zram_generator::config[3236]: No configuration found.
Jan 15 12:50:41.307940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 15 12:50:41.414484 systemd[1]: Reloading finished in 347 ms.
Jan 15 12:50:41.458679 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 12:50:41.464756 systemd[1]: kubelet.service: Deactivated successfully.
Jan 15 12:50:41.464968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 12:50:41.465084 systemd[1]: kubelet.service: Consumed 1.332s CPU time, 114.1M memory peak, 0B memory swap peak.
Jan 15 12:50:41.478429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 12:50:41.594327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 12:50:41.605387 (kubelet)[3294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 15 12:50:41.659984 kubelet[3294]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 15 12:50:41.659984 kubelet[3294]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 15 12:50:41.659984 kubelet[3294]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 15 12:50:41.659984 kubelet[3294]: I0115 12:50:41.660127 3294 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 15 12:50:41.666642 kubelet[3294]: I0115 12:50:41.665426 3294 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 15 12:50:41.666642 kubelet[3294]: I0115 12:50:41.665484 3294 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 15 12:50:41.666642 kubelet[3294]: I0115 12:50:41.665673 3294 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 15 12:50:41.668169 kubelet[3294]: I0115 12:50:41.668145 3294 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 15 12:50:41.670277 kubelet[3294]: I0115 12:50:41.670236 3294 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 15 12:50:41.682321 kubelet[3294]: I0115 12:50:41.682286 3294 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 15 12:50:41.682504 kubelet[3294]: I0115 12:50:41.682485 3294 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 15 12:50:41.682680 kubelet[3294]: I0115 12:50:41.682656 3294 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 15 12:50:41.682680 kubelet[3294]: I0115 12:50:41.682680 3294 topology_manager.go:138] "Creating topology manager with none policy"
Jan 15 12:50:41.682788 kubelet[3294]: I0115 12:50:41.682689 3294 container_manager_linux.go:301] "Creating device plugin manager"
Jan 15 12:50:41.682788 kubelet[3294]: I0115 12:50:41.682719 3294 state_mem.go:36] "Initialized new in-memory state store"
Jan 15 12:50:41.682845 kubelet[3294]: I0115 12:50:41.682820 3294 kubelet.go:396] "Attempting to sync node with API server"
Jan 15 12:50:41.682845 kubelet[3294]: I0115 12:50:41.682834 3294 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 15 12:50:41.682886 kubelet[3294]: I0115 12:50:41.682852 3294 kubelet.go:312] "Adding apiserver pod source"
Jan 15 12:50:41.682886 kubelet[3294]: I0115 12:50:41.682866 3294 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 15 12:50:41.683932 kubelet[3294]: I0115 12:50:41.683904 3294 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 15 12:50:41.684231 kubelet[3294]: I0115 12:50:41.684218 3294 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 15 12:50:41.684909 kubelet[3294]: I0115 12:50:41.684894 3294 server.go:1256] "Started kubelet"
Jan 15 12:50:41.686841 kubelet[3294]: I0115 12:50:41.686809 3294 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 15 12:50:41.693503 kubelet[3294]: I0115 12:50:41.693388 3294 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 15 12:50:41.695437 kubelet[3294]: I0115 12:50:41.694345 3294 server.go:461] "Adding debug handlers to kubelet server"
Jan 15 12:50:41.695437 kubelet[3294]: I0115 12:50:41.695346 3294 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 15 12:50:41.695547 kubelet[3294]: I0115 12:50:41.695520 3294 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 15 12:50:41.696388 kubelet[3294]: I0115 12:50:41.696173 3294 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 15 12:50:41.696388 kubelet[3294]: I0115 12:50:41.696258 3294 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 15 12:50:41.696388 kubelet[3294]: I0115 12:50:41.696380 3294 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 15 12:50:41.701516 kubelet[3294]: I0115 12:50:41.700935 3294 factory.go:221] Registration of the systemd container factory successfully
Jan 15 12:50:41.704588 kubelet[3294]: I0115 12:50:41.704137 3294 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 15 12:50:41.708785 kubelet[3294]: I0115 12:50:41.708741 3294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 15 12:50:41.713259 kubelet[3294]: E0115 12:50:41.710972 3294 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 15 12:50:41.713259 kubelet[3294]: I0115 12:50:41.711130 3294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 15 12:50:41.713259 kubelet[3294]: I0115 12:50:41.711150 3294 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 15 12:50:41.713259 kubelet[3294]: I0115 12:50:41.711170 3294 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 15 12:50:41.713259 kubelet[3294]: E0115 12:50:41.711219 3294 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 15 12:50:41.730597 kubelet[3294]: I0115 12:50:41.730570 3294 factory.go:221] Registration of the containerd container factory successfully
Jan 15 12:50:41.799880 kubelet[3294]: I0115 12:50:41.799852 3294 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:41.809313 kubelet[3294]: I0115 12:50:41.809266 3294 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 15 12:50:41.809313 kubelet[3294]: I0115 12:50:41.809291 3294 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 15 12:50:41.809313 kubelet[3294]: I0115 12:50:41.809312 3294 state_mem.go:36] "Initialized new in-memory state store"
Jan 15 12:50:41.809498 kubelet[3294]: I0115 12:50:41.809465 3294 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 15 12:50:41.809498 kubelet[3294]: I0115 12:50:41.809486 3294 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 15 12:50:41.809498 kubelet[3294]: I0115 12:50:41.809494 3294 policy_none.go:49] "None policy: Start"
Jan 15 12:50:41.810674 kubelet[3294]: I0115 12:50:41.810635 3294 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 15 12:50:41.810674 kubelet[3294]: I0115 12:50:41.810668 3294 state_mem.go:35] "Initializing new in-memory state store"
Jan 15 12:50:41.810906 kubelet[3294]: I0115 12:50:41.810828 3294 state_mem.go:75] "Updated machine memory state"
Jan 15 12:50:41.811518 kubelet[3294]: E0115 12:50:41.811401 3294 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 15 12:50:41.812230 kubelet[3294]: I0115 12:50:41.812189 3294 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:41.812388 kubelet[3294]: I0115 12:50:41.812267 3294 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:41.819813 kubelet[3294]: I0115 12:50:41.819764 3294 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 15 12:50:41.820100 kubelet[3294]: I0115 12:50:41.820075 3294 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 15 12:50:42.013789 kubelet[3294]: I0115 12:50:42.012375 3294 topology_manager.go:215] "Topology Admit Handler" podUID="51bfaa8384784a1355021fe7118454fc" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:42.013789 kubelet[3294]: I0115 12:50:42.012476 3294 topology_manager.go:215] "Topology Admit Handler" podUID="bc46c5ec733e7cd990a74d688de65334" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:42.013789 kubelet[3294]: I0115 12:50:42.012531 3294 topology_manager.go:215] "Topology Admit Handler" podUID="9c540ecdf01be428eb7cd69fe455b7a8" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:42.022015 kubelet[3294]: W0115 12:50:42.021795 3294 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 15 12:50:42.026126 kubelet[3294]: W0115 12:50:42.026100 3294 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 15 12:50:42.026549 kubelet[3294]: W0115 12:50:42.026327 3294 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 15 12:50:42.099027 kubelet[3294]: I0115 12:50:42.098296 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc46c5ec733e7cd990a74d688de65334-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b8bd16053a\" (UID: \"bc46c5ec733e7cd990a74d688de65334\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:42.099027 kubelet[3294]: I0115 12:50:42.098342 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc46c5ec733e7cd990a74d688de65334-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-b8bd16053a\" (UID: \"bc46c5ec733e7cd990a74d688de65334\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:42.099027 kubelet[3294]: I0115 12:50:42.098364 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c540ecdf01be428eb7cd69fe455b7a8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-b8bd16053a\" (UID: \"9c540ecdf01be428eb7cd69fe455b7a8\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:42.099027 kubelet[3294]: I0115 12:50:42.098383 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51bfaa8384784a1355021fe7118454fc-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b8bd16053a\" (UID: \"51bfaa8384784a1355021fe7118454fc\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:42.099027 kubelet[3294]: I0115 12:50:42.098415 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc46c5ec733e7cd990a74d688de65334-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b8bd16053a\" (UID: \"bc46c5ec733e7cd990a74d688de65334\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:42.099281 kubelet[3294]: I0115 12:50:42.098437 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bc46c5ec733e7cd990a74d688de65334-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-b8bd16053a\" (UID: \"bc46c5ec733e7cd990a74d688de65334\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:42.099281 kubelet[3294]: I0115 12:50:42.098458 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc46c5ec733e7cd990a74d688de65334-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-b8bd16053a\" (UID: \"bc46c5ec733e7cd990a74d688de65334\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:42.099281 kubelet[3294]: I0115 12:50:42.098476 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51bfaa8384784a1355021fe7118454fc-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b8bd16053a\" (UID: \"51bfaa8384784a1355021fe7118454fc\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:42.099281 kubelet[3294]: I0115 12:50:42.098496 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51bfaa8384784a1355021fe7118454fc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-b8bd16053a\" (UID: \"51bfaa8384784a1355021fe7118454fc\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b8bd16053a"
Jan 15 12:50:42.691724 kubelet[3294]: I0115 12:50:42.691674 3294 apiserver.go:52] "Watching apiserver"
Jan 15 12:50:42.797420 kubelet[3294]: I0115 12:50:42.797327 3294 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 15 12:50:42.877408 kubelet[3294]: I0115 12:50:42.877309 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-b8bd16053a" podStartSLOduration=0.877262104 podStartE2EDuration="877.262104ms" podCreationTimestamp="2025-01-15 12:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:50:42.844864593 +0000 UTC m=+1.234435153" watchObservedRunningTime="2025-01-15 12:50:42.877262104 +0000 UTC m=+1.266832664"
Jan 15 12:50:42.916789 kubelet[3294]: I0115 12:50:42.916598 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-b8bd16053a" podStartSLOduration=0.916555911 podStartE2EDuration="916.555911ms" podCreationTimestamp="2025-01-15 12:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:50:42.879107788 +0000 UTC m=+1.268678348" watchObservedRunningTime="2025-01-15 12:50:42.916555911 +0000 UTC m=+1.306126471"
Jan 15 12:50:42.969767 kubelet[3294]: I0115 12:50:42.968752 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b8bd16053a" podStartSLOduration=0.968711785 podStartE2EDuration="968.711785ms" podCreationTimestamp="2025-01-15 12:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:50:42.917256232 +0000 UTC m=+1.306826792" watchObservedRunningTime="2025-01-15 12:50:42.968711785 +0000 UTC m=+1.358282345"
Jan 15 12:50:46.827684 sudo[2345]: pam_unix(sudo:session): session closed for user root
Jan 15 12:50:46.907797 sshd[2342]: pam_unix(sshd:session): session closed for user core
Jan 15 12:50:46.912430 systemd[1]: sshd@6-10.200.20.38:22-10.200.16.10:35732.service: Deactivated successfully.
Jan 15 12:50:46.915238 systemd[1]: session-9.scope: Deactivated successfully.
Jan 15 12:50:46.915460 systemd[1]: session-9.scope: Consumed 7.787s CPU time, 187.2M memory peak, 0B memory swap peak.
Jan 15 12:50:46.918736 systemd-logind[1693]: Session 9 logged out. Waiting for processes to exit.
Jan 15 12:50:46.922362 systemd-logind[1693]: Removed session 9.
Jan 15 12:50:55.388948 kubelet[3294]: I0115 12:50:55.388915 3294 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 15 12:50:55.390279 containerd[1744]: time="2025-01-15T12:50:55.389437314Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 15 12:50:55.390843 kubelet[3294]: I0115 12:50:55.390453 3294 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 15 12:50:56.199166 kubelet[3294]: I0115 12:50:56.198521 3294 topology_manager.go:215] "Topology Admit Handler" podUID="312f0134-ef52-4cea-8fcf-20f70f3f0971" podNamespace="kube-system" podName="kube-proxy-vkvwt"
Jan 15 12:50:56.207946 systemd[1]: Created slice kubepods-besteffort-pod312f0134_ef52_4cea_8fcf_20f70f3f0971.slice - libcontainer container kubepods-besteffort-pod312f0134_ef52_4cea_8fcf_20f70f3f0971.slice.
Jan 15 12:50:56.390148 kubelet[3294]: I0115 12:50:56.390100 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/312f0134-ef52-4cea-8fcf-20f70f3f0971-kube-proxy\") pod \"kube-proxy-vkvwt\" (UID: \"312f0134-ef52-4cea-8fcf-20f70f3f0971\") " pod="kube-system/kube-proxy-vkvwt"
Jan 15 12:50:56.390148 kubelet[3294]: I0115 12:50:56.390148 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/312f0134-ef52-4cea-8fcf-20f70f3f0971-xtables-lock\") pod \"kube-proxy-vkvwt\" (UID: \"312f0134-ef52-4cea-8fcf-20f70f3f0971\") " pod="kube-system/kube-proxy-vkvwt"
Jan 15 12:50:56.390622 kubelet[3294]: I0115 12:50:56.390170 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/312f0134-ef52-4cea-8fcf-20f70f3f0971-lib-modules\") pod \"kube-proxy-vkvwt\" (UID: \"312f0134-ef52-4cea-8fcf-20f70f3f0971\") " pod="kube-system/kube-proxy-vkvwt"
Jan 15 12:50:56.390622 kubelet[3294]: I0115 12:50:56.390194 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps2fc\" (UniqueName: \"kubernetes.io/projected/312f0134-ef52-4cea-8fcf-20f70f3f0971-kube-api-access-ps2fc\") pod \"kube-proxy-vkvwt\" (UID: \"312f0134-ef52-4cea-8fcf-20f70f3f0971\") " pod="kube-system/kube-proxy-vkvwt"
Jan 15 12:50:56.436500 kubelet[3294]: I0115 12:50:56.436459 3294 topology_manager.go:215] "Topology Admit Handler" podUID="8b77b2e8-63c6-4baa-a7c6-8c21da833dc3" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-pnv9f"
Jan 15 12:50:56.445035 systemd[1]: Created slice kubepods-besteffort-pod8b77b2e8_63c6_4baa_a7c6_8c21da833dc3.slice - libcontainer container kubepods-besteffort-pod8b77b2e8_63c6_4baa_a7c6_8c21da833dc3.slice.
Jan 15 12:50:56.517134 containerd[1744]: time="2025-01-15T12:50:56.517023016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vkvwt,Uid:312f0134-ef52-4cea-8fcf-20f70f3f0971,Namespace:kube-system,Attempt:0,}"
Jan 15 12:50:56.572611 containerd[1744]: time="2025-01-15T12:50:56.572448487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 15 12:50:56.572611 containerd[1744]: time="2025-01-15T12:50:56.572515168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 15 12:50:56.572611 containerd[1744]: time="2025-01-15T12:50:56.572530328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 12:50:56.573116 containerd[1744]: time="2025-01-15T12:50:56.572648768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 12:50:56.592465 kubelet[3294]: I0115 12:50:56.592435 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wz25\" (UniqueName: \"kubernetes.io/projected/8b77b2e8-63c6-4baa-a7c6-8c21da833dc3-kube-api-access-4wz25\") pod \"tigera-operator-c7ccbd65-pnv9f\" (UID: \"8b77b2e8-63c6-4baa-a7c6-8c21da833dc3\") " pod="tigera-operator/tigera-operator-c7ccbd65-pnv9f"
Jan 15 12:50:56.592914 kubelet[3294]: I0115 12:50:56.592634 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8b77b2e8-63c6-4baa-a7c6-8c21da833dc3-var-lib-calico\") pod \"tigera-operator-c7ccbd65-pnv9f\" (UID: \"8b77b2e8-63c6-4baa-a7c6-8c21da833dc3\") " pod="tigera-operator/tigera-operator-c7ccbd65-pnv9f"
Jan 15 12:50:56.596257 systemd[1]: Started cri-containerd-48eb92ed4485a669fd71ebafbb90639fab39c41f03adcb36eebba3a27e794a15.scope - libcontainer container 48eb92ed4485a669fd71ebafbb90639fab39c41f03adcb36eebba3a27e794a15.
Jan 15 12:50:56.620269 containerd[1744]: time="2025-01-15T12:50:56.620073823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vkvwt,Uid:312f0134-ef52-4cea-8fcf-20f70f3f0971,Namespace:kube-system,Attempt:0,} returns sandbox id \"48eb92ed4485a669fd71ebafbb90639fab39c41f03adcb36eebba3a27e794a15\""
Jan 15 12:50:56.624662 containerd[1744]: time="2025-01-15T12:50:56.624596392Z" level=info msg="CreateContainer within sandbox \"48eb92ed4485a669fd71ebafbb90639fab39c41f03adcb36eebba3a27e794a15\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 15 12:50:56.672062 containerd[1744]: time="2025-01-15T12:50:56.671936647Z" level=info msg="CreateContainer within sandbox \"48eb92ed4485a669fd71ebafbb90639fab39c41f03adcb36eebba3a27e794a15\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2a87797c7a02d2cce903ebc7ce9d01cb48a2483e52f552dfcad365aee1da9288\""
Jan 15 12:50:56.676339 containerd[1744]: time="2025-01-15T12:50:56.673195610Z" level=info msg="StartContainer for \"2a87797c7a02d2cce903ebc7ce9d01cb48a2483e52f552dfcad365aee1da9288\""
Jan 15 12:50:56.708236 systemd[1]: Started cri-containerd-2a87797c7a02d2cce903ebc7ce9d01cb48a2483e52f552dfcad365aee1da9288.scope - libcontainer container 2a87797c7a02d2cce903ebc7ce9d01cb48a2483e52f552dfcad365aee1da9288.
Jan 15 12:50:56.739632 containerd[1744]: time="2025-01-15T12:50:56.739475942Z" level=info msg="StartContainer for \"2a87797c7a02d2cce903ebc7ce9d01cb48a2483e52f552dfcad365aee1da9288\" returns successfully"
Jan 15 12:50:56.748927 containerd[1744]: time="2025-01-15T12:50:56.748847521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-pnv9f,Uid:8b77b2e8-63c6-4baa-a7c6-8c21da833dc3,Namespace:tigera-operator,Attempt:0,}"
Jan 15 12:50:56.803605 containerd[1744]: time="2025-01-15T12:50:56.803058830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 15 12:50:56.803605 containerd[1744]: time="2025-01-15T12:50:56.803117670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 15 12:50:56.803605 containerd[1744]: time="2025-01-15T12:50:56.803138710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 12:50:56.803605 containerd[1744]: time="2025-01-15T12:50:56.803440711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 12:50:56.844207 systemd[1]: Started cri-containerd-5d57c834c849c75b4a2bfe0691ba3cc7e2652bd270c035cacadbfe12295496be.scope - libcontainer container 5d57c834c849c75b4a2bfe0691ba3cc7e2652bd270c035cacadbfe12295496be.
Jan 15 12:50:56.876623 containerd[1744]: time="2025-01-15T12:50:56.876583178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-pnv9f,Uid:8b77b2e8-63c6-4baa-a7c6-8c21da833dc3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5d57c834c849c75b4a2bfe0691ba3cc7e2652bd270c035cacadbfe12295496be\""
Jan 15 12:50:56.880757 containerd[1744]: time="2025-01-15T12:50:56.880405105Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 15 12:50:59.746581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866823236.mount: Deactivated successfully.
Jan 15 12:51:00.267570 containerd[1744]: time="2025-01-15T12:51:00.267511616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:00.271513 containerd[1744]: time="2025-01-15T12:51:00.271450263Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125972"
Jan 15 12:51:00.276735 containerd[1744]: time="2025-01-15T12:51:00.275671072Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:00.281129 containerd[1744]: time="2025-01-15T12:51:00.281078522Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:00.282139 containerd[1744]: time="2025-01-15T12:51:00.282108484Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 3.401655619s"
Jan 15 12:51:00.282265 containerd[1744]: time="2025-01-15T12:51:00.282249164Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Jan 15 12:51:00.284911 containerd[1744]: time="2025-01-15T12:51:00.284860090Z" level=info msg="CreateContainer within sandbox \"5d57c834c849c75b4a2bfe0691ba3cc7e2652bd270c035cacadbfe12295496be\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 15 12:51:00.331494 containerd[1744]: time="2025-01-15T12:51:00.331436380Z" level=info msg="CreateContainer within sandbox \"5d57c834c849c75b4a2bfe0691ba3cc7e2652bd270c035cacadbfe12295496be\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"99bcfec179d6501962c9a651a6ab3f0f21bfa8df20c6d2c06e7415829fd4f2ce\""
Jan 15 12:51:00.333131 containerd[1744]: time="2025-01-15T12:51:00.332178342Z" level=info msg="StartContainer for \"99bcfec179d6501962c9a651a6ab3f0f21bfa8df20c6d2c06e7415829fd4f2ce\""
Jan 15 12:51:00.360193 systemd[1]: Started cri-containerd-99bcfec179d6501962c9a651a6ab3f0f21bfa8df20c6d2c06e7415829fd4f2ce.scope - libcontainer container 99bcfec179d6501962c9a651a6ab3f0f21bfa8df20c6d2c06e7415829fd4f2ce.
Jan 15 12:51:00.395075 containerd[1744]: time="2025-01-15T12:51:00.394422983Z" level=info msg="StartContainer for \"99bcfec179d6501962c9a651a6ab3f0f21bfa8df20c6d2c06e7415829fd4f2ce\" returns successfully"
Jan 15 12:51:00.854096 kubelet[3294]: I0115 12:51:00.853393 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vkvwt" podStartSLOduration=4.853349798 podStartE2EDuration="4.853349798s" podCreationTimestamp="2025-01-15 12:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:50:56.836976178 +0000 UTC m=+15.226546738" watchObservedRunningTime="2025-01-15 12:51:00.853349798 +0000 UTC m=+19.242920358"
Jan 15 12:51:04.137112 kubelet[3294]: I0115 12:51:04.136933 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-pnv9f" podStartSLOduration=4.732286579 podStartE2EDuration="8.136882083s" podCreationTimestamp="2025-01-15 12:50:56 +0000 UTC" firstStartedPulling="2025-01-15 12:50:56.878092501 +0000 UTC m=+15.267663021" lastFinishedPulling="2025-01-15 12:51:00.282687965 +0000 UTC m=+18.672258525" observedRunningTime="2025-01-15 12:51:00.854980762 +0000 UTC m=+19.244551322" watchObservedRunningTime="2025-01-15 12:51:04.136882083 +0000 UTC m=+22.526452643"
Jan 15 12:51:04.139392 kubelet[3294]: I0115 12:51:04.138901 3294 topology_manager.go:215] "Topology Admit Handler" podUID="f03b2820-3059-4391-990c-d7a1b202b860" podNamespace="calico-system" podName="calico-typha-8599b4fcd5-55zb8"
Jan 15 12:51:04.152212 systemd[1]: Created slice kubepods-besteffort-podf03b2820_3059_4391_990c_d7a1b202b860.slice - libcontainer container kubepods-besteffort-podf03b2820_3059_4391_990c_d7a1b202b860.slice.
Jan 15 12:51:04.306782 kubelet[3294]: I0115 12:51:04.306709 3294 topology_manager.go:215] "Topology Admit Handler" podUID="28cc28b6-2bc0-4c81-969e-829565a31e9a" podNamespace="calico-system" podName="calico-node-kmqvp"
Jan 15 12:51:04.313786 kubelet[3294]: W0115 12:51:04.313718 3294 reflector.go:539] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4081.3.0-a-b8bd16053a" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.0-a-b8bd16053a' and this object
Jan 15 12:51:04.313786 kubelet[3294]: E0115 12:51:04.313756 3294 reflector.go:147] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4081.3.0-a-b8bd16053a" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.0-a-b8bd16053a' and this object
Jan 15 12:51:04.314453 kubelet[3294]: W0115 12:51:04.314414 3294 reflector.go:539] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4081.3.0-a-b8bd16053a" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.0-a-b8bd16053a' and this object
Jan 15 12:51:04.314453 kubelet[3294]: E0115 12:51:04.314437 3294 reflector.go:147] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4081.3.0-a-b8bd16053a" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.0-a-b8bd16053a' and this object
Jan 15 12:51:04.316290 systemd[1]: Created slice kubepods-besteffort-pod28cc28b6_2bc0_4c81_969e_829565a31e9a.slice - libcontainer container kubepods-besteffort-pod28cc28b6_2bc0_4c81_969e_829565a31e9a.slice.
Jan 15 12:51:04.339586 kubelet[3294]: I0115 12:51:04.339511 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f03b2820-3059-4391-990c-d7a1b202b860-tigera-ca-bundle\") pod \"calico-typha-8599b4fcd5-55zb8\" (UID: \"f03b2820-3059-4391-990c-d7a1b202b860\") " pod="calico-system/calico-typha-8599b4fcd5-55zb8"
Jan 15 12:51:04.339586 kubelet[3294]: I0115 12:51:04.339565 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f03b2820-3059-4391-990c-d7a1b202b860-typha-certs\") pod \"calico-typha-8599b4fcd5-55zb8\" (UID: \"f03b2820-3059-4391-990c-d7a1b202b860\") " pod="calico-system/calico-typha-8599b4fcd5-55zb8"
Jan 15 12:51:04.339767 kubelet[3294]: I0115 12:51:04.339605 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2trr\" (UniqueName: \"kubernetes.io/projected/f03b2820-3059-4391-990c-d7a1b202b860-kube-api-access-n2trr\") pod \"calico-typha-8599b4fcd5-55zb8\" (UID: \"f03b2820-3059-4391-990c-d7a1b202b860\") " pod="calico-system/calico-typha-8599b4fcd5-55zb8"
Jan 15 12:51:04.426127 kubelet[3294]: I0115 12:51:04.425846 3294 topology_manager.go:215] "Topology Admit Handler" podUID="7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03" podNamespace="calico-system" podName="csi-node-driver-qbhfp"
Jan 15 12:51:04.427134 kubelet[3294]: E0115 12:51:04.426945 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbhfp" podUID="7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03"
Jan 15 12:51:04.442838 kubelet[3294]: I0115 12:51:04.440068 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28cc28b6-2bc0-4c81-969e-829565a31e9a-tigera-ca-bundle\") pod \"calico-node-kmqvp\" (UID: \"28cc28b6-2bc0-4c81-969e-829565a31e9a\") " pod="calico-system/calico-node-kmqvp"
Jan 15 12:51:04.442838 kubelet[3294]: I0115 12:51:04.440111 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqwqj\" (UniqueName: \"kubernetes.io/projected/28cc28b6-2bc0-4c81-969e-829565a31e9a-kube-api-access-nqwqj\") pod \"calico-node-kmqvp\" (UID: \"28cc28b6-2bc0-4c81-969e-829565a31e9a\") " pod="calico-system/calico-node-kmqvp"
Jan 15 12:51:04.442838 kubelet[3294]: I0115 12:51:04.440152 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28cc28b6-2bc0-4c81-969e-829565a31e9a-lib-modules\") pod \"calico-node-kmqvp\" (UID: \"28cc28b6-2bc0-4c81-969e-829565a31e9a\") " pod="calico-system/calico-node-kmqvp"
Jan 15 12:51:04.442838 kubelet[3294]: I0115 12:51:04.440174 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/28cc28b6-2bc0-4c81-969e-829565a31e9a-var-lib-calico\") pod \"calico-node-kmqvp\" (UID: \"28cc28b6-2bc0-4c81-969e-829565a31e9a\") " pod="calico-system/calico-node-kmqvp"
Jan 15 12:51:04.442838 kubelet[3294]: I0115 12:51:04.440197 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/28cc28b6-2bc0-4c81-969e-829565a31e9a-var-run-calico\") pod \"calico-node-kmqvp\" (UID: \"28cc28b6-2bc0-4c81-969e-829565a31e9a\") " pod="calico-system/calico-node-kmqvp"
Jan 15 12:51:04.443172 kubelet[3294]: I0115 12:51:04.440227 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28cc28b6-2bc0-4c81-969e-829565a31e9a-xtables-lock\") pod \"calico-node-kmqvp\" (UID: \"28cc28b6-2bc0-4c81-969e-829565a31e9a\") " pod="calico-system/calico-node-kmqvp"
Jan 15 12:51:04.443172 kubelet[3294]: I0115 12:51:04.440246 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/28cc28b6-2bc0-4c81-969e-829565a31e9a-flexvol-driver-host\") pod \"calico-node-kmqvp\" (UID: \"28cc28b6-2bc0-4c81-969e-829565a31e9a\") " pod="calico-system/calico-node-kmqvp"
Jan 15 12:51:04.443172 kubelet[3294]: I0115 12:51:04.440271 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/28cc28b6-2bc0-4c81-969e-829565a31e9a-policysync\") pod \"calico-node-kmqvp\" (UID: \"28cc28b6-2bc0-4c81-969e-829565a31e9a\") " pod="calico-system/calico-node-kmqvp"
Jan 15 12:51:04.443172 kubelet[3294]: I0115 12:51:04.440292 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/28cc28b6-2bc0-4c81-969e-829565a31e9a-cni-log-dir\") pod \"calico-node-kmqvp\" (UID: \"28cc28b6-2bc0-4c81-969e-829565a31e9a\") " pod="calico-system/calico-node-kmqvp"
Jan 15 12:51:04.443172 kubelet[3294]: I0115 12:51:04.440312 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/28cc28b6-2bc0-4c81-969e-829565a31e9a-cni-net-dir\") pod \"calico-node-kmqvp\" (UID: \"28cc28b6-2bc0-4c81-969e-829565a31e9a\") " pod="calico-system/calico-node-kmqvp"
Jan 15 12:51:04.443282 kubelet[3294]: I0115 12:51:04.440331 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/28cc28b6-2bc0-4c81-969e-829565a31e9a-node-certs\") pod \"calico-node-kmqvp\" (UID: \"28cc28b6-2bc0-4c81-969e-829565a31e9a\") " pod="calico-system/calico-node-kmqvp"
Jan 15 12:51:04.443282 kubelet[3294]: I0115 12:51:04.440353 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/28cc28b6-2bc0-4c81-969e-829565a31e9a-cni-bin-dir\") pod \"calico-node-kmqvp\" (UID: \"28cc28b6-2bc0-4c81-969e-829565a31e9a\") " pod="calico-system/calico-node-kmqvp"
Jan 15 12:51:04.541225 kubelet[3294]: I0115 12:51:04.541152 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5q7c\" (UniqueName: \"kubernetes.io/projected/7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03-kube-api-access-v5q7c\") pod \"csi-node-driver-qbhfp\" (UID: \"7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03\") " pod="calico-system/csi-node-driver-qbhfp"
Jan 15 12:51:04.541225 kubelet[3294]: I0115 12:51:04.541223 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03-varrun\") pod \"csi-node-driver-qbhfp\" (UID: \"7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03\") " pod="calico-system/csi-node-driver-qbhfp"
Jan 15 12:51:04.541435 kubelet[3294]: I0115 12:51:04.541246 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03-socket-dir\") pod \"csi-node-driver-qbhfp\" (UID: \"7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03\") " pod="calico-system/csi-node-driver-qbhfp"
Jan 15 12:51:04.541435 kubelet[3294]: I0115 12:51:04.541330 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03-kubelet-dir\") pod \"csi-node-driver-qbhfp\" (UID: \"7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03\") " pod="calico-system/csi-node-driver-qbhfp"
Jan 15 12:51:04.541435 kubelet[3294]: I0115 12:51:04.541352 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03-registration-dir\") pod \"csi-node-driver-qbhfp\" (UID: \"7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03\") " pod="calico-system/csi-node-driver-qbhfp"
Jan 15 12:51:04.544048 kubelet[3294]: E0115 12:51:04.543973 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.544231 kubelet[3294]: W0115 12:51:04.544212 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.544382 kubelet[3294]: E0115 12:51:04.544368 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.545590 kubelet[3294]: E0115 12:51:04.545303 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.546090 kubelet[3294]: W0115 12:51:04.546064 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.546222 kubelet[3294]: E0115 12:51:04.546208 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.547590 kubelet[3294]: E0115 12:51:04.547569 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.547694 kubelet[3294]: W0115 12:51:04.547680 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.547812 kubelet[3294]: E0115 12:51:04.547802 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.548403 kubelet[3294]: E0115 12:51:04.548388 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.548511 kubelet[3294]: W0115 12:51:04.548496 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.548591 kubelet[3294]: E0115 12:51:04.548579 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.548970 kubelet[3294]: E0115 12:51:04.548954 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.549128 kubelet[3294]: W0115 12:51:04.549104 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.549399 kubelet[3294]: E0115 12:51:04.549291 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.549893 kubelet[3294]: E0115 12:51:04.549799 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.549893 kubelet[3294]: W0115 12:51:04.549813 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.549893 kubelet[3294]: E0115 12:51:04.549855 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.550599 kubelet[3294]: E0115 12:51:04.550477 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.550599 kubelet[3294]: W0115 12:51:04.550536 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.551045 kubelet[3294]: E0115 12:51:04.550894 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.551045 kubelet[3294]: W0115 12:51:04.550906 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.551045 kubelet[3294]: E0115 12:51:04.551035 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.551347 kubelet[3294]: E0115 12:51:04.551064 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.551347 kubelet[3294]: E0115 12:51:04.551265 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.551347 kubelet[3294]: W0115 12:51:04.551320 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.551653 kubelet[3294]: E0115 12:51:04.551595 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.552065 kubelet[3294]: E0115 12:51:04.551938 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.552065 kubelet[3294]: W0115 12:51:04.551998 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.552222 kubelet[3294]: E0115 12:51:04.552182 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.552907 kubelet[3294]: E0115 12:51:04.552800 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.552907 kubelet[3294]: W0115 12:51:04.552819 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.552907 kubelet[3294]: E0115 12:51:04.552869 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.553258 kubelet[3294]: E0115 12:51:04.553162 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.553258 kubelet[3294]: W0115 12:51:04.553178 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.553445 kubelet[3294]: E0115 12:51:04.553366 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.553445 kubelet[3294]: W0115 12:51:04.553375 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.553445 kubelet[3294]: E0115 12:51:04.553386 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.553445 kubelet[3294]: E0115 12:51:04.553390 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.553621 kubelet[3294]: E0115 12:51:04.553596 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.553621 kubelet[3294]: W0115 12:51:04.553607 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.553672 kubelet[3294]: E0115 12:51:04.553624 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.553923 kubelet[3294]: E0115 12:51:04.553843 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.553923 kubelet[3294]: W0115 12:51:04.553859 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.553923 kubelet[3294]: E0115 12:51:04.553873 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.554273 kubelet[3294]: E0115 12:51:04.554200 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.554273 kubelet[3294]: W0115 12:51:04.554213 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.554273 kubelet[3294]: E0115 12:51:04.554246 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.573228 kubelet[3294]: E0115 12:51:04.573124 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.573228 kubelet[3294]: W0115 12:51:04.573150 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.573228 kubelet[3294]: E0115 12:51:04.573181 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.642668 kubelet[3294]: E0115 12:51:04.642504 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.642668 kubelet[3294]: W0115 12:51:04.642527 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.642668 kubelet[3294]: E0115 12:51:04.642551 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.642668 kubelet[3294]: E0115 12:51:04.642756 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.642668 kubelet[3294]: W0115 12:51:04.642765 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.642668 kubelet[3294]: E0115 12:51:04.642777 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.644667 kubelet[3294]: E0115 12:51:04.644405 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.644667 kubelet[3294]: W0115 12:51:04.644465 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.644667 kubelet[3294]: E0115 12:51:04.644488 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.645203 kubelet[3294]: E0115 12:51:04.645084 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.645203 kubelet[3294]: W0115 12:51:04.645098 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.645203 kubelet[3294]: E0115 12:51:04.645143 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.645545 kubelet[3294]: E0115 12:51:04.645418 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.645545 kubelet[3294]: W0115 12:51:04.645431 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.645698 kubelet[3294]: E0115 12:51:04.645648 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.646185 kubelet[3294]: E0115 12:51:04.645978 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.646185 kubelet[3294]: W0115 12:51:04.646096 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.646802 kubelet[3294]: E0115 12:51:04.646347 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.646802 kubelet[3294]: E0115 12:51:04.646516 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.646802 kubelet[3294]: W0115 12:51:04.646529 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.646802 kubelet[3294]: E0115 12:51:04.646547 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.647218 kubelet[3294]: E0115 12:51:04.647197 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.647368 kubelet[3294]: W0115 12:51:04.647291 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.647368 kubelet[3294]: E0115 12:51:04.647357 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.648326 kubelet[3294]: E0115 12:51:04.648307 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.648927 kubelet[3294]: W0115 12:51:04.648414 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.648927 kubelet[3294]: E0115 12:51:04.648465 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.649779 kubelet[3294]: E0115 12:51:04.649709 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.649779 kubelet[3294]: W0115 12:51:04.649727 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.649779 kubelet[3294]: E0115 12:51:04.649753 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:04.650580 kubelet[3294]: E0115 12:51:04.650147 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:04.650580 kubelet[3294]: W0115 12:51:04.650162 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:04.650580 kubelet[3294]: E0115 12:51:04.650185 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 15 12:51:04.651164 kubelet[3294]: E0115 12:51:04.651010 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.651164 kubelet[3294]: W0115 12:51:04.651028 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.651164 kubelet[3294]: E0115 12:51:04.651085 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.652077 kubelet[3294]: E0115 12:51:04.651898 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.652077 kubelet[3294]: W0115 12:51:04.651916 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.652077 kubelet[3294]: E0115 12:51:04.651950 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.652460 kubelet[3294]: E0115 12:51:04.652203 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.652460 kubelet[3294]: W0115 12:51:04.652212 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.653099 kubelet[3294]: E0115 12:51:04.652592 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.653396 kubelet[3294]: E0115 12:51:04.653287 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.653396 kubelet[3294]: W0115 12:51:04.653301 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.653646 kubelet[3294]: E0115 12:51:04.653575 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.653873 kubelet[3294]: E0115 12:51:04.653801 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.653873 kubelet[3294]: W0115 12:51:04.653813 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.654246 kubelet[3294]: E0115 12:51:04.654232 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:04.654555 kubelet[3294]: E0115 12:51:04.654543 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.654777 kubelet[3294]: W0115 12:51:04.654672 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.655028 kubelet[3294]: E0115 12:51:04.654838 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.655487 kubelet[3294]: E0115 12:51:04.655383 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.655487 kubelet[3294]: W0115 12:51:04.655398 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.655681 kubelet[3294]: E0115 12:51:04.655623 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.656269 kubelet[3294]: E0115 12:51:04.656253 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.656545 kubelet[3294]: W0115 12:51:04.656383 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.656723 kubelet[3294]: E0115 12:51:04.656697 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.657174 kubelet[3294]: E0115 12:51:04.656945 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.657174 kubelet[3294]: W0115 12:51:04.656957 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.657464 kubelet[3294]: E0115 12:51:04.657356 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.657705 kubelet[3294]: E0115 12:51:04.657593 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.657705 kubelet[3294]: W0115 12:51:04.657655 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.658162 kubelet[3294]: E0115 12:51:04.657808 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:04.658806 kubelet[3294]: E0115 12:51:04.658585 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.658806 kubelet[3294]: W0115 12:51:04.658603 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.658806 kubelet[3294]: E0115 12:51:04.658621 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.659433 kubelet[3294]: E0115 12:51:04.659210 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.659433 kubelet[3294]: W0115 12:51:04.659224 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.659433 kubelet[3294]: E0115 12:51:04.659240 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.659870 kubelet[3294]: E0115 12:51:04.659712 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.659870 kubelet[3294]: W0115 12:51:04.659731 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.660569 kubelet[3294]: E0115 12:51:04.660260 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.661219 kubelet[3294]: E0115 12:51:04.661202 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.661408 kubelet[3294]: W0115 12:51:04.661355 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.661858 kubelet[3294]: E0115 12:51:04.661748 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.661858 kubelet[3294]: E0115 12:51:04.661825 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.661858 kubelet[3294]: W0115 12:51:04.661921 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.661858 kubelet[3294]: E0115 12:51:04.661934 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:04.681291 kubelet[3294]: E0115 12:51:04.681151 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.682109 kubelet[3294]: W0115 12:51:04.681425 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.682109 kubelet[3294]: E0115 12:51:04.681461 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.752912 kubelet[3294]: E0115 12:51:04.752883 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.753195 kubelet[3294]: W0115 12:51:04.753118 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.753195 kubelet[3294]: E0115 12:51:04.753151 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:04.763103 containerd[1744]: time="2025-01-15T12:51:04.763066064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8599b4fcd5-55zb8,Uid:f03b2820-3059-4391-990c-d7a1b202b860,Namespace:calico-system,Attempt:0,}" Jan 15 12:51:04.823075 containerd[1744]: time="2025-01-15T12:51:04.821606738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:04.823075 containerd[1744]: time="2025-01-15T12:51:04.821678058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:04.823075 containerd[1744]: time="2025-01-15T12:51:04.821689698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:04.823338 containerd[1744]: time="2025-01-15T12:51:04.823139021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:04.842727 systemd[1]: Started cri-containerd-01553c2e3d125ef92c906460c053b07c55ff18c867f95c78dcd9549ff2a15afa.scope - libcontainer container 01553c2e3d125ef92c906460c053b07c55ff18c867f95c78dcd9549ff2a15afa. Jan 15 12:51:04.854502 kubelet[3294]: E0115 12:51:04.854381 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.854502 kubelet[3294]: W0115 12:51:04.854401 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.854502 kubelet[3294]: E0115 12:51:04.854423 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:04.881150 containerd[1744]: time="2025-01-15T12:51:04.880816774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8599b4fcd5-55zb8,Uid:f03b2820-3059-4391-990c-d7a1b202b860,Namespace:calico-system,Attempt:0,} returns sandbox id \"01553c2e3d125ef92c906460c053b07c55ff18c867f95c78dcd9549ff2a15afa\"" Jan 15 12:51:04.883161 containerd[1744]: time="2025-01-15T12:51:04.882857818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 15 12:51:04.955662 kubelet[3294]: E0115 12:51:04.955416 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:04.955662 kubelet[3294]: W0115 12:51:04.955502 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:04.955662 kubelet[3294]: E0115 12:51:04.955526 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:05.056822 kubelet[3294]: E0115 12:51:05.056753 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:05.056822 kubelet[3294]: W0115 12:51:05.056776 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:05.056822 kubelet[3294]: E0115 12:51:05.056798 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:05.158233 kubelet[3294]: E0115 12:51:05.158171 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:05.158233 kubelet[3294]: W0115 12:51:05.158197 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:05.158590 kubelet[3294]: E0115 12:51:05.158247 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:05.259351 kubelet[3294]: E0115 12:51:05.259221 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:05.259351 kubelet[3294]: W0115 12:51:05.259246 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:05.259351 kubelet[3294]: E0115 12:51:05.259270 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:05.355584 kubelet[3294]: E0115 12:51:05.355434 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:05.355584 kubelet[3294]: W0115 12:51:05.355464 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:05.355584 kubelet[3294]: E0115 12:51:05.355487 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:05.521703 containerd[1744]: time="2025-01-15T12:51:05.521395345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kmqvp,Uid:28cc28b6-2bc0-4c81-969e-829565a31e9a,Namespace:calico-system,Attempt:0,}" Jan 15 12:51:05.573789 containerd[1744]: time="2025-01-15T12:51:05.573326128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:05.573789 containerd[1744]: time="2025-01-15T12:51:05.573527208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:05.573789 containerd[1744]: time="2025-01-15T12:51:05.573545088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:05.573789 containerd[1744]: time="2025-01-15T12:51:05.573677167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:05.601158 systemd[1]: Started cri-containerd-2edd048e50f75bde1e47bfb1a9645614da220f630ffdf360a543e555ed3bcdcb.scope - libcontainer container 2edd048e50f75bde1e47bfb1a9645614da220f630ffdf360a543e555ed3bcdcb. Jan 15 12:51:05.624362 containerd[1744]: time="2025-01-15T12:51:05.624324873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kmqvp,Uid:28cc28b6-2bc0-4c81-969e-829565a31e9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"2edd048e50f75bde1e47bfb1a9645614da220f630ffdf360a543e555ed3bcdcb\"" Jan 15 12:51:06.447480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884492111.mount: Deactivated successfully. 
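Every kubelet triplet above reports the same condition: on each plugin-probe cycle the kubelet executes the FlexVolume driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the command `init` and parses its stdout as JSON. The executable is not present, so the call produces empty output, unmarshalling "" fails with "unexpected end of JSON input", and the nodeagent~uds plugin directory is skipped on every pass. Below is a minimal Go sketch of a driver that would satisfy the probe; it is a hypothetical illustration of the FlexVolume call convention, not the actual Calico node-agent binary.

```go
// Hypothetical FlexVolume driver entrypoint satisfying the kubelet's
// probe. The kubelet invokes the binary with a command ("init",
// "mount", "unmount", ...) and parses stdout as a JSON status object.
package main

import (
	"encoding/json"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// An empty stdout at this point is exactly what driver-call.go
		// reports above as "unexpected end of JSON input".
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
	default:
		json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
	}
}
```

The probe failures are noise rather than a fault: the kubelet simply skips the directory and retries, and they do not block the sandbox and container startups recorded around them.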
Jan 15 12:51:06.712664 kubelet[3294]: E0115 12:51:06.712537 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbhfp" podUID="7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03"
Jan 15 12:51:06.944628 containerd[1744]: time="2025-01-15T12:51:06.944567621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:06.947189 containerd[1744]: time="2025-01-15T12:51:06.947086497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 15 12:51:06.950104 containerd[1744]: time="2025-01-15T12:51:06.949914131Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:06.954140 containerd[1744]: time="2025-01-15T12:51:06.954080684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:06.954700 containerd[1744]: time="2025-01-15T12:51:06.954629123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.071690945s"
Jan 15 12:51:06.954700 containerd[1744]: time="2025-01-15T12:51:06.954667083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 15 12:51:06.955664 containerd[1744]: time="2025-01-15T12:51:06.955229642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 15 12:51:06.971637 containerd[1744]: time="2025-01-15T12:51:06.971402051Z" level=info msg="CreateContainer within sandbox \"01553c2e3d125ef92c906460c053b07c55ff18c867f95c78dcd9549ff2a15afa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 15 12:51:07.002659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount724298175.mount: Deactivated successfully.
Jan 15 12:51:07.014475 containerd[1744]: time="2025-01-15T12:51:07.014398572Z" level=info msg="CreateContainer within sandbox \"01553c2e3d125ef92c906460c053b07c55ff18c867f95c78dcd9549ff2a15afa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ddc709fab94b4a2a5ab9454ab6f9aca2dbbc07e49df08f31ec63f4e5684866ca\""
Jan 15 12:51:07.015601 containerd[1744]: time="2025-01-15T12:51:07.015372370Z" level=info msg="StartContainer for \"ddc709fab94b4a2a5ab9454ab6f9aca2dbbc07e49df08f31ec63f4e5684866ca\""
Jan 15 12:51:07.044234 systemd[1]: Started cri-containerd-ddc709fab94b4a2a5ab9454ab6f9aca2dbbc07e49df08f31ec63f4e5684866ca.scope - libcontainer container ddc709fab94b4a2a5ab9454ab6f9aca2dbbc07e49df08f31ec63f4e5684866ca.
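The containerd and systemd entries above trace a complete CRI round trip for the typha pod: RunPodSandbox returns a sandbox id, PullImage fetches the image (29231308 bytes in ~2.07s), and CreateContainer/StartContainer then run the calico-typha container inside that sandbox, with each container surfacing as a cri-containerd-<id>.scope unit in systemd. The sketch below replays the first two RPCs directly against containerd's CRI socket; it is illustrative only, with metadata copied from the log, and in practice the kubelet drives these same RPCs internally.

```go
// Illustrative CRI client replaying the call order visible in the log:
// RunPodSandbox -> PullImage (-> CreateContainer -> StartContainer).
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox -- the metadata matches the PodSandboxMetadata
	// that containerd echoes back in the log.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-typha-8599b4fcd5-55zb8",
				Uid:       "f03b2820-3059-4391-990c-d7a1b202b860",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. PullImage -- containerd then logs ImageCreate events and the
	// "Pulled image ... in <duration>" summary.
	if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.29.1"},
	}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s ready; CreateContainer/StartContainer follow", sb.PodSandboxId)
}
```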
Jan 15 12:51:07.082818 containerd[1744]: time="2025-01-15T12:51:07.082730885Z" level=info msg="StartContainer for \"ddc709fab94b4a2a5ab9454ab6f9aca2dbbc07e49df08f31ec63f4e5684866ca\" returns successfully"
Jan 15 12:51:07.863846 kubelet[3294]: E0115 12:51:07.863643 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:07.863846 kubelet[3294]: W0115 12:51:07.863671 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:07.863846 kubelet[3294]: E0115 12:51:07.863695 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:07.891678 kubelet[3294]: E0115 12:51:07.891614 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:07.891678 kubelet[3294]: W0115 12:51:07.891629 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:07.891678 kubelet[3294]: E0115 12:51:07.891642 3294 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 12:51:08.451959 containerd[1744]: time="2025-01-15T12:51:08.451837797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:08.455761 containerd[1744]: time="2025-01-15T12:51:08.455432764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811"
Jan 15 12:51:08.460032 containerd[1744]: time="2025-01-15T12:51:08.459953974Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:08.464664 containerd[1744]: time="2025-01-15T12:51:08.464553344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:08.465366 containerd[1744]: time="2025-01-15T12:51:08.465248745Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.509983104s"
Jan 15 12:51:08.465366 containerd[1744]: time="2025-01-15T12:51:08.465293505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Jan 15 12:51:08.468322 containerd[1744]: time="2025-01-15T12:51:08.468253432Z" level=info msg="CreateContainer within sandbox \"2edd048e50f75bde1e47bfb1a9645614da220f630ffdf360a543e555ed3bcdcb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 15 12:51:08.516152 containerd[1744]: time="2025-01-15T12:51:08.516070613Z" level=info msg="CreateContainer within sandbox \"2edd048e50f75bde1e47bfb1a9645614da220f630ffdf360a543e555ed3bcdcb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f9ef751d1a50d47d5465602677f0a444b25b9c4987f1237e7bb65d22ff33e137\""
Jan 15 12:51:08.517088 containerd[1744]: time="2025-01-15T12:51:08.517062255Z" level=info msg="StartContainer for \"f9ef751d1a50d47d5465602677f0a444b25b9c4987f1237e7bb65d22ff33e137\""
Jan 15 12:51:08.559388 systemd[1]: Started cri-containerd-f9ef751d1a50d47d5465602677f0a444b25b9c4987f1237e7bb65d22ff33e137.scope - libcontainer container f9ef751d1a50d47d5465602677f0a444b25b9c4987f1237e7bb65d22ff33e137.
Jan 15 12:51:08.602257 containerd[1744]: time="2025-01-15T12:51:08.602175236Z" level=info msg="StartContainer for \"f9ef751d1a50d47d5465602677f0a444b25b9c4987f1237e7bb65d22ff33e137\" returns successfully"
Jan 15 12:51:08.618822 systemd[1]: cri-containerd-f9ef751d1a50d47d5465602677f0a444b25b9c4987f1237e7bb65d22ff33e137.scope: Deactivated successfully.
Jan 15 12:51:08.645449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9ef751d1a50d47d5465602677f0a444b25b9c4987f1237e7bb65d22ff33e137-rootfs.mount: Deactivated successfully.
Jan 15 12:51:08.712589 kubelet[3294]: E0115 12:51:08.711915 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbhfp" podUID="7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03"
Jan 15 12:51:08.863647 kubelet[3294]: I0115 12:51:08.862838 3294 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 15 12:51:08.883978 kubelet[3294]: I0115 12:51:08.883251 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-8599b4fcd5-55zb8" podStartSLOduration=2.810750729 podStartE2EDuration="4.883209874s" podCreationTimestamp="2025-01-15 12:51:04 +0000 UTC" firstStartedPulling="2025-01-15 12:51:04.882510097 +0000 UTC m=+23.272080657" lastFinishedPulling="2025-01-15 12:51:06.954969282 +0000 UTC m=+25.344539802" observedRunningTime="2025-01-15 12:51:07.88407415 +0000 UTC m=+26.273644710" watchObservedRunningTime="2025-01-15 12:51:08.883209874 +0000 UTC m=+27.272780434"
Jan 15 12:51:09.561261 containerd[1744]: time="2025-01-15T12:51:09.561174435Z" level=info msg="shim disconnected" id=f9ef751d1a50d47d5465602677f0a444b25b9c4987f1237e7bb65d22ff33e137 namespace=k8s.io
Jan 15 12:51:09.561261 containerd[1744]: time="2025-01-15T12:51:09.561240315Z" level=warning msg="cleaning up after shim disconnected" id=f9ef751d1a50d47d5465602677f0a444b25b9c4987f1237e7bb65d22ff33e137 namespace=k8s.io
Jan 15 12:51:09.561261 containerd[1744]: time="2025-01-15T12:51:09.561248275Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 12:51:09.869087 containerd[1744]: time="2025-01-15T12:51:09.868820128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 15 12:51:10.711486 kubelet[3294]: E0115 12:51:10.711409 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbhfp" podUID="7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03"
Jan 15 12:51:12.712898 kubelet[3294]: E0115 12:51:12.712225 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbhfp" podUID="7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03"
Jan 15 12:51:13.400515 containerd[1744]: time="2025-01-15T12:51:13.400406414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:13.405574 containerd[1744]: time="2025-01-15T12:51:13.405346543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
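The pod_startup_latency_tracker entry above appears to follow a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A small sketch re-deriving the logged figures under that assumption:

```go
// Re-deriving the typha pod's startup figures from the timestamps in
// the pod_startup_latency_tracker line. Assumes
//   E2E = watchObservedRunningTime - podCreationTimestamp
//   SLO = E2E - (lastFinishedPulling - firstStartedPulling)
// which reproduces the logged values to within rounding (the creation
// timestamp is only printed at whole-second granularity).
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-15 12:51:04 +0000 UTC")
	firstPull := mustParse("2025-01-15 12:51:04.882510097 +0000 UTC")
	lastPull := mustParse("2025-01-15 12:51:06.954969282 +0000 UTC")
	running := mustParse("2025-01-15 12:51:08.883209874 +0000 UTC")

	e2e := running.Sub(created)     // 4.883209874s, the logged podStartE2EDuration
	pull := lastPull.Sub(firstPull) // ~2.0725s spent pulling the typha image
	fmt.Println("E2E:", e2e)
	fmt.Println("SLO:", e2e-pull) // ~2.81075s, matching podStartSLOduration=2.810750729
}
```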
Jan 15 12:51:13.409847 containerd[1744]: time="2025-01-15T12:51:13.409779191Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:13.415494 containerd[1744]: time="2025-01-15T12:51:13.415417121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:13.416781 containerd[1744]: time="2025-01-15T12:51:13.416081002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.547217034s" Jan 15 12:51:13.416781 containerd[1744]: time="2025-01-15T12:51:13.416116122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 15 12:51:13.419353 containerd[1744]: time="2025-01-15T12:51:13.419323008Z" level=info msg="CreateContainer within sandbox \"2edd048e50f75bde1e47bfb1a9645614da220f630ffdf360a543e555ed3bcdcb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 12:51:13.468957 containerd[1744]: time="2025-01-15T12:51:13.468914898Z" level=info msg="CreateContainer within sandbox \"2edd048e50f75bde1e47bfb1a9645614da220f630ffdf360a543e555ed3bcdcb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3a00d59f38fd0b62062811ecb29fe158d753464a40d9176e7ffa3a02be608488\"" Jan 15 12:51:13.469964 containerd[1744]: time="2025-01-15T12:51:13.469842699Z" level=info msg="StartContainer for \"3a00d59f38fd0b62062811ecb29fe158d753464a40d9176e7ffa3a02be608488\"" Jan 15 12:51:13.507198 systemd[1]: Started cri-containerd-3a00d59f38fd0b62062811ecb29fe158d753464a40d9176e7ffa3a02be608488.scope - libcontainer container 3a00d59f38fd0b62062811ecb29fe158d753464a40d9176e7ffa3a02be608488. Jan 15 12:51:13.540926 containerd[1744]: time="2025-01-15T12:51:13.540803668Z" level=info msg="StartContainer for \"3a00d59f38fd0b62062811ecb29fe158d753464a40d9176e7ffa3a02be608488\" returns successfully" Jan 15 12:51:14.676257 containerd[1744]: time="2025-01-15T12:51:14.676179202Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 12:51:14.679408 systemd[1]: cri-containerd-3a00d59f38fd0b62062811ecb29fe158d753464a40d9176e7ffa3a02be608488.scope: Deactivated successfully. Jan 15 12:51:14.700977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a00d59f38fd0b62062811ecb29fe158d753464a40d9176e7ffa3a02be608488-rootfs.mount: Deactivated successfully. 
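The install-cni container above writes calico-kubeconfig into /etc/cni/net.d before any *.conf/*.conflist exists, so containerd's config watcher fires and the reload fails with "no network config found". A minimal sketch of such a watch-and-reload loop, assuming the github.com/fsnotify/fsnotify package and a stand-in loader (an illustration, not containerd's actual code):

    package main

    import (
    	"fmt"
    	"log"
    	"path/filepath"

    	"github.com/fsnotify/fsnotify"
    )

    // loadCNIConfig stands in for the real loader: it succeeds only once a
    // network config (*.conf or *.conflist) exists in the directory.
    func loadCNIConfig(dir string) error {
    	for _, pattern := range []string{"*.conf", "*.conflist"} {
    		matches, err := filepath.Glob(filepath.Join(dir, pattern))
    		if err != nil {
    			return err
    		}
    		if len(matches) > 0 {
    			return nil
    		}
    	}
    	return fmt.Errorf("no network config found in %s", dir)
    }

    func main() {
    	const dir = "/etc/cni/net.d"
    	w, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer w.Close()
    	if err := w.Add(dir); err != nil {
    		log.Fatal(err)
    	}
    	// Every write or create under the directory triggers a reload attempt;
    	// writes to calico-kubeconfig alone keep failing, as in the log above.
    	for ev := range w.Events {
    		if ev.Op&(fsnotify.Write|fsnotify.Create) != 0 {
    			if err := loadCNIConfig(dir); err != nil {
    				log.Printf("failed to reload cni configuration after %s: %v", ev, err)
    			}
    		}
    	}
    }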
Jan 15 12:51:14.712302 kubelet[3294]: E0115 12:51:14.712251 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbhfp" podUID="7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03" Jan 15 12:51:15.045297 kubelet[3294]: I0115 12:51:14.721316 3294 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 15 12:51:15.045297 kubelet[3294]: I0115 12:51:14.754591 3294 topology_manager.go:215] "Topology Admit Handler" podUID="5aee6d16-2980-4221-9d0f-dbd0eaab2ab3" podNamespace="kube-system" podName="coredns-76f75df574-psqf5" Jan 15 12:51:15.045297 kubelet[3294]: I0115 12:51:14.761944 3294 topology_manager.go:215] "Topology Admit Handler" podUID="22ef187a-1055-4e64-99a2-f81f790e5b7a" podNamespace="kube-system" podName="coredns-76f75df574-hpmct" Jan 15 12:51:15.045297 kubelet[3294]: I0115 12:51:14.764006 3294 topology_manager.go:215] "Topology Admit Handler" podUID="e669e4c4-4197-45bf-985d-bac7d7c912eb" podNamespace="calico-system" podName="calico-kube-controllers-697f49564b-jvb87" Jan 15 12:51:15.045297 kubelet[3294]: I0115 12:51:14.765473 3294 topology_manager.go:215] "Topology Admit Handler" podUID="dd1c71d7-7e6f-4176-b57a-00a3f42a566e" podNamespace="calico-apiserver" podName="calico-apiserver-64bc7cdd6d-87bn5" Jan 15 12:51:15.045297 kubelet[3294]: I0115 12:51:14.766447 3294 topology_manager.go:215] "Topology Admit Handler" podUID="b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace" podNamespace="calico-apiserver" podName="calico-apiserver-64bc7cdd6d-9bxp5" Jan 15 12:51:15.045297 kubelet[3294]: I0115 12:51:14.835050 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jzx6\" (UniqueName: \"kubernetes.io/projected/22ef187a-1055-4e64-99a2-f81f790e5b7a-kube-api-access-9jzx6\") pod \"coredns-76f75df574-hpmct\" (UID: \"22ef187a-1055-4e64-99a2-f81f790e5b7a\") " pod="kube-system/coredns-76f75df574-hpmct" Jan 15 12:51:15.045297 kubelet[3294]: I0115 12:51:14.835095 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22ef187a-1055-4e64-99a2-f81f790e5b7a-config-volume\") pod \"coredns-76f75df574-hpmct\" (UID: \"22ef187a-1055-4e64-99a2-f81f790e5b7a\") " pod="kube-system/coredns-76f75df574-hpmct" Jan 15 12:51:14.766856 systemd[1]: Created slice kubepods-burstable-pod5aee6d16_2980_4221_9d0f_dbd0eaab2ab3.slice - libcontainer container kubepods-burstable-pod5aee6d16_2980_4221_9d0f_dbd0eaab2ab3.slice. 
Jan 15 12:51:15.045660 kubelet[3294]: I0115 12:51:14.835123 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmnsw\" (UniqueName: \"kubernetes.io/projected/e669e4c4-4197-45bf-985d-bac7d7c912eb-kube-api-access-cmnsw\") pod \"calico-kube-controllers-697f49564b-jvb87\" (UID: \"e669e4c4-4197-45bf-985d-bac7d7c912eb\") " pod="calico-system/calico-kube-controllers-697f49564b-jvb87" Jan 15 12:51:15.045660 kubelet[3294]: I0115 12:51:14.835204 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dd1c71d7-7e6f-4176-b57a-00a3f42a566e-calico-apiserver-certs\") pod \"calico-apiserver-64bc7cdd6d-87bn5\" (UID: \"dd1c71d7-7e6f-4176-b57a-00a3f42a566e\") " pod="calico-apiserver/calico-apiserver-64bc7cdd6d-87bn5" Jan 15 12:51:15.045660 kubelet[3294]: I0115 12:51:14.835307 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e669e4c4-4197-45bf-985d-bac7d7c912eb-tigera-ca-bundle\") pod \"calico-kube-controllers-697f49564b-jvb87\" (UID: \"e669e4c4-4197-45bf-985d-bac7d7c912eb\") " pod="calico-system/calico-kube-controllers-697f49564b-jvb87" Jan 15 12:51:15.045660 kubelet[3294]: I0115 12:51:14.835358 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gctfz\" (UniqueName: \"kubernetes.io/projected/5aee6d16-2980-4221-9d0f-dbd0eaab2ab3-kube-api-access-gctfz\") pod \"coredns-76f75df574-psqf5\" (UID: \"5aee6d16-2980-4221-9d0f-dbd0eaab2ab3\") " pod="kube-system/coredns-76f75df574-psqf5" Jan 15 12:51:15.045660 kubelet[3294]: I0115 12:51:14.835398 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace-calico-apiserver-certs\") pod \"calico-apiserver-64bc7cdd6d-9bxp5\" (UID: \"b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace\") " pod="calico-apiserver/calico-apiserver-64bc7cdd6d-9bxp5" Jan 15 12:51:14.777572 systemd[1]: Created slice kubepods-burstable-pod22ef187a_1055_4e64_99a2_f81f790e5b7a.slice - libcontainer container kubepods-burstable-pod22ef187a_1055_4e64_99a2_f81f790e5b7a.slice. 
Jan 15 12:51:15.045836 kubelet[3294]: I0115 12:51:14.835452 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aee6d16-2980-4221-9d0f-dbd0eaab2ab3-config-volume\") pod \"coredns-76f75df574-psqf5\" (UID: \"5aee6d16-2980-4221-9d0f-dbd0eaab2ab3\") " pod="kube-system/coredns-76f75df574-psqf5" Jan 15 12:51:15.045836 kubelet[3294]: I0115 12:51:14.835477 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q7lg\" (UniqueName: \"kubernetes.io/projected/dd1c71d7-7e6f-4176-b57a-00a3f42a566e-kube-api-access-6q7lg\") pod \"calico-apiserver-64bc7cdd6d-87bn5\" (UID: \"dd1c71d7-7e6f-4176-b57a-00a3f42a566e\") " pod="calico-apiserver/calico-apiserver-64bc7cdd6d-87bn5" Jan 15 12:51:15.045836 kubelet[3294]: I0115 12:51:14.835499 3294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjws\" (UniqueName: \"kubernetes.io/projected/b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace-kube-api-access-kbjws\") pod \"calico-apiserver-64bc7cdd6d-9bxp5\" (UID: \"b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace\") " pod="calico-apiserver/calico-apiserver-64bc7cdd6d-9bxp5" Jan 15 12:51:14.789107 systemd[1]: Created slice kubepods-besteffort-pode669e4c4_4197_45bf_985d_bac7d7c912eb.slice - libcontainer container kubepods-besteffort-pode669e4c4_4197_45bf_985d_bac7d7c912eb.slice. Jan 15 12:51:14.798678 systemd[1]: Created slice kubepods-besteffort-poddd1c71d7_7e6f_4176_b57a_00a3f42a566e.slice - libcontainer container kubepods-besteffort-poddd1c71d7_7e6f_4176_b57a_00a3f42a566e.slice. Jan 15 12:51:14.805946 systemd[1]: Created slice kubepods-besteffort-podb898b8c3_71eb_4b4c_8c8f_7b3ba0996ace.slice - libcontainer container kubepods-besteffort-podb898b8c3_71eb_4b4c_8c8f_7b3ba0996ace.slice. 
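The "Created slice" entries follow the naming convention of kubelet's systemd cgroup driver as it appears in this log: a QoS-class segment ("burstable" or "besteffort" here) plus the pod UID with dashes mapped to underscores. A throwaway reproduction of just that mapping (not kubelet's implementation):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName rebuilds the slice name pattern visible above:
    // kubepods-<qos>-pod<uid>.slice, with dashes in the UID replaced by
    // underscores (systemd unit names reserve "-" as a hierarchy separator).
    func podSliceName(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	// Matches kubepods-besteffort-poddd1c71d7_7e6f_4176_b57a_00a3f42a566e.slice above.
    	fmt.Println(podSliceName("besteffort", "dd1c71d7-7e6f-4176-b57a-00a3f42a566e"))
    }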
Jan 15 12:51:15.346403 containerd[1744]: time="2025-01-15T12:51:15.346064134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-psqf5,Uid:5aee6d16-2980-4221-9d0f-dbd0eaab2ab3,Namespace:kube-system,Attempt:0,}" Jan 15 12:51:15.348342 containerd[1744]: time="2025-01-15T12:51:15.348083698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-697f49564b-jvb87,Uid:e669e4c4-4197-45bf-985d-bac7d7c912eb,Namespace:calico-system,Attempt:0,}" Jan 15 12:51:15.354304 containerd[1744]: time="2025-01-15T12:51:15.354270789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bc7cdd6d-87bn5,Uid:dd1c71d7-7e6f-4176-b57a-00a3f42a566e,Namespace:calico-apiserver,Attempt:0,}" Jan 15 12:51:15.369705 containerd[1744]: time="2025-01-15T12:51:15.369608536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bc7cdd6d-9bxp5,Uid:b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace,Namespace:calico-apiserver,Attempt:0,}" Jan 15 12:51:15.376506 containerd[1744]: time="2025-01-15T12:51:15.376465989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hpmct,Uid:22ef187a-1055-4e64-99a2-f81f790e5b7a,Namespace:kube-system,Attempt:0,}" Jan 15 12:51:15.406898 kubelet[3294]: I0115 12:51:15.406216 3294 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:51:15.861934 containerd[1744]: time="2025-01-15T12:51:15.861764307Z" level=info msg="shim disconnected" id=3a00d59f38fd0b62062811ecb29fe158d753464a40d9176e7ffa3a02be608488 namespace=k8s.io Jan 15 12:51:15.861934 containerd[1744]: time="2025-01-15T12:51:15.861892827Z" level=warning msg="cleaning up after shim disconnected" id=3a00d59f38fd0b62062811ecb29fe158d753464a40d9176e7ffa3a02be608488 namespace=k8s.io Jan 15 12:51:15.861934 containerd[1744]: time="2025-01-15T12:51:15.861922747Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 12:51:15.873048 containerd[1744]: time="2025-01-15T12:51:15.872866487Z" level=warning msg="cleanup warnings time=\"2025-01-15T12:51:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 15 12:51:15.900802 containerd[1744]: time="2025-01-15T12:51:15.900516857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 15 12:51:16.194571 containerd[1744]: time="2025-01-15T12:51:16.194512709Z" level=error msg="Failed to destroy network for sandbox \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.195617 containerd[1744]: time="2025-01-15T12:51:16.195559111Z" level=error msg="encountered an error cleaning up failed sandbox \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.196593 containerd[1744]: time="2025-01-15T12:51:16.195630791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-697f49564b-jvb87,Uid:e669e4c4-4197-45bf-985d-bac7d7c912eb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.197740 kubelet[3294]: E0115 12:51:16.197716 3294 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.198188 kubelet[3294]: E0115 12:51:16.198170 3294 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-697f49564b-jvb87" Jan 15 12:51:16.198283 kubelet[3294]: E0115 12:51:16.198272 3294 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-697f49564b-jvb87" Jan 15 12:51:16.198405 kubelet[3294]: E0115 12:51:16.198394 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-697f49564b-jvb87_calico-system(e669e4c4-4197-45bf-985d-bac7d7c912eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-697f49564b-jvb87_calico-system(e669e4c4-4197-45bf-985d-bac7d7c912eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-697f49564b-jvb87" podUID="e669e4c4-4197-45bf-985d-bac7d7c912eb" Jan 15 12:51:16.201692 containerd[1744]: time="2025-01-15T12:51:16.201652242Z" level=error msg="Failed to destroy network for sandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.202926 containerd[1744]: time="2025-01-15T12:51:16.202591764Z" level=error msg="encountered an error cleaning up failed sandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.204064 containerd[1744]: time="2025-01-15T12:51:16.203240605Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-hpmct,Uid:22ef187a-1055-4e64-99a2-f81f790e5b7a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.205179 kubelet[3294]: E0115 12:51:16.205141 3294 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.205283 kubelet[3294]: E0115 12:51:16.205257 3294 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hpmct" Jan 15 12:51:16.205283 kubelet[3294]: E0115 12:51:16.205283 3294 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hpmct" Jan 15 12:51:16.205643 kubelet[3294]: E0115 12:51:16.205407 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hpmct_kube-system(22ef187a-1055-4e64-99a2-f81f790e5b7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hpmct_kube-system(22ef187a-1055-4e64-99a2-f81f790e5b7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hpmct" podUID="22ef187a-1055-4e64-99a2-f81f790e5b7a" Jan 15 12:51:16.222743 containerd[1744]: time="2025-01-15T12:51:16.222693520Z" level=error msg="Failed to destroy network for sandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.223479 containerd[1744]: time="2025-01-15T12:51:16.223431401Z" level=error msg="encountered an error cleaning up failed sandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.223570 containerd[1744]: 
time="2025-01-15T12:51:16.223498281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-psqf5,Uid:5aee6d16-2980-4221-9d0f-dbd0eaab2ab3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.224133 kubelet[3294]: E0115 12:51:16.223755 3294 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.224133 kubelet[3294]: E0115 12:51:16.223817 3294 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-psqf5" Jan 15 12:51:16.224133 kubelet[3294]: E0115 12:51:16.223838 3294 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-psqf5" Jan 15 12:51:16.224290 kubelet[3294]: E0115 12:51:16.223887 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-psqf5_kube-system(5aee6d16-2980-4221-9d0f-dbd0eaab2ab3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-psqf5_kube-system(5aee6d16-2980-4221-9d0f-dbd0eaab2ab3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-psqf5" podUID="5aee6d16-2980-4221-9d0f-dbd0eaab2ab3" Jan 15 12:51:16.226871 containerd[1744]: time="2025-01-15T12:51:16.226826007Z" level=error msg="Failed to destroy network for sandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.227559 containerd[1744]: time="2025-01-15T12:51:16.227368848Z" level=error msg="encountered an error cleaning up failed sandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 15 12:51:16.227559 containerd[1744]: time="2025-01-15T12:51:16.227470289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bc7cdd6d-87bn5,Uid:dd1c71d7-7e6f-4176-b57a-00a3f42a566e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.228058 kubelet[3294]: E0115 12:51:16.227830 3294 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.228058 kubelet[3294]: E0115 12:51:16.227882 3294 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64bc7cdd6d-87bn5" Jan 15 12:51:16.228058 kubelet[3294]: E0115 12:51:16.227974 3294 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64bc7cdd6d-87bn5" Jan 15 12:51:16.228173 kubelet[3294]: E0115 12:51:16.228082 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64bc7cdd6d-87bn5_calico-apiserver(dd1c71d7-7e6f-4176-b57a-00a3f42a566e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64bc7cdd6d-87bn5_calico-apiserver(dd1c71d7-7e6f-4176-b57a-00a3f42a566e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64bc7cdd6d-87bn5" podUID="dd1c71d7-7e6f-4176-b57a-00a3f42a566e" Jan 15 12:51:16.237645 containerd[1744]: time="2025-01-15T12:51:16.237594227Z" level=error msg="Failed to destroy network for sandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.238373 containerd[1744]: time="2025-01-15T12:51:16.238239468Z" level=error msg="encountered an error cleaning up failed sandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.238373 containerd[1744]: time="2025-01-15T12:51:16.238313548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bc7cdd6d-9bxp5,Uid:b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.238973 kubelet[3294]: E0115 12:51:16.238915 3294 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.238973 kubelet[3294]: E0115 12:51:16.238970 3294 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64bc7cdd6d-9bxp5" Jan 15 12:51:16.240609 kubelet[3294]: E0115 12:51:16.239039 3294 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64bc7cdd6d-9bxp5" Jan 15 12:51:16.240609 kubelet[3294]: E0115 12:51:16.239293 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64bc7cdd6d-9bxp5_calico-apiserver(b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64bc7cdd6d-9bxp5_calico-apiserver(b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64bc7cdd6d-9bxp5" podUID="b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace" Jan 15 12:51:16.718531 systemd[1]: Created slice kubepods-besteffort-pod7ec3a0a6_7f52_426d_bd6b_d0c0bd61ff03.slice - libcontainer container kubepods-besteffort-pod7ec3a0a6_7f52_426d_bd6b_d0c0bd61ff03.slice. 
Jan 15 12:51:16.720948 containerd[1744]: time="2025-01-15T12:51:16.720897141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbhfp,Uid:7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03,Namespace:calico-system,Attempt:0,}" Jan 15 12:51:16.827828 containerd[1744]: time="2025-01-15T12:51:16.827780695Z" level=error msg="Failed to destroy network for sandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.828682 containerd[1744]: time="2025-01-15T12:51:16.828412856Z" level=error msg="encountered an error cleaning up failed sandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.828682 containerd[1744]: time="2025-01-15T12:51:16.828492176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbhfp,Uid:7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.830612 kubelet[3294]: E0115 12:51:16.828928 3294 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:16.830612 kubelet[3294]: E0115 12:51:16.829052 3294 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbhfp" Jan 15 12:51:16.830612 kubelet[3294]: E0115 12:51:16.829074 3294 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbhfp" Jan 15 12:51:16.831553 kubelet[3294]: E0115 12:51:16.829192 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qbhfp_calico-system(7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qbhfp_calico-system(7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qbhfp" podUID="7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03" Jan 15 12:51:16.901816 kubelet[3294]: I0115 12:51:16.901772 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:16.903817 containerd[1744]: time="2025-01-15T12:51:16.903228591Z" level=info msg="StopPodSandbox for \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\"" Jan 15 12:51:16.906260 containerd[1744]: time="2025-01-15T12:51:16.904896034Z" level=info msg="Ensure that sandbox 1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c in task-service has been cleanup successfully" Jan 15 12:51:16.906399 kubelet[3294]: I0115 12:51:16.905315 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:16.907151 kubelet[3294]: I0115 12:51:16.906719 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:16.910193 containerd[1744]: time="2025-01-15T12:51:16.908734281Z" level=info msg="StopPodSandbox for \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\"" Jan 15 12:51:16.910193 containerd[1744]: time="2025-01-15T12:51:16.908900521Z" level=info msg="Ensure that sandbox 8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599 in task-service has been cleanup successfully" Jan 15 12:51:16.910827 containerd[1744]: time="2025-01-15T12:51:16.910740765Z" level=info msg="StopPodSandbox for \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\"" Jan 15 12:51:16.913610 containerd[1744]: time="2025-01-15T12:51:16.913496210Z" level=info msg="Ensure that sandbox a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c in task-service has been cleanup successfully" Jan 15 12:51:16.925166 kubelet[3294]: I0115 12:51:16.925130 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:16.933862 containerd[1744]: time="2025-01-15T12:51:16.933816007Z" level=info msg="StopPodSandbox for \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\"" Jan 15 12:51:16.934728 containerd[1744]: time="2025-01-15T12:51:16.934697928Z" level=info msg="Ensure that sandbox b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84 in task-service has been cleanup successfully" Jan 15 12:51:16.936886 kubelet[3294]: I0115 12:51:16.936756 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:16.940431 containerd[1744]: time="2025-01-15T12:51:16.940351898Z" level=info msg="StopPodSandbox for \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\"" Jan 15 12:51:16.941078 containerd[1744]: time="2025-01-15T12:51:16.940868259Z" level=info msg="Ensure that sandbox 02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900 in task-service has been cleanup successfully" Jan 15 12:51:16.948520 kubelet[3294]: I0115 12:51:16.948469 3294 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:16.949445 containerd[1744]: time="2025-01-15T12:51:16.949260035Z" level=info msg="StopPodSandbox for \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\"" Jan 15 12:51:16.949574 containerd[1744]: time="2025-01-15T12:51:16.949478795Z" level=info msg="Ensure that sandbox 7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1 in task-service has been cleanup successfully" Jan 15 12:51:16.995219 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c-shm.mount: Deactivated successfully. Jan 15 12:51:16.995320 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1-shm.mount: Deactivated successfully. Jan 15 12:51:16.995580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900-shm.mount: Deactivated successfully. Jan 15 12:51:17.011764 containerd[1744]: time="2025-01-15T12:51:17.011688627Z" level=error msg="StopPodSandbox for \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\" failed" error="failed to destroy network for sandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:17.012429 kubelet[3294]: E0115 12:51:17.012071 3294 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:17.012429 kubelet[3294]: E0115 12:51:17.012157 3294 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84"} Jan 15 12:51:17.012429 kubelet[3294]: E0115 12:51:17.012198 3294 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22ef187a-1055-4e64-99a2-f81f790e5b7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:17.012429 kubelet[3294]: E0115 12:51:17.012228 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22ef187a-1055-4e64-99a2-f81f790e5b7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hpmct" 
podUID="22ef187a-1055-4e64-99a2-f81f790e5b7a" Jan 15 12:51:17.049508 containerd[1744]: time="2025-01-15T12:51:17.049284575Z" level=error msg="StopPodSandbox for \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\" failed" error="failed to destroy network for sandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:17.051124 kubelet[3294]: E0115 12:51:17.049730 3294 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:17.051124 kubelet[3294]: E0115 12:51:17.049776 3294 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1"} Jan 15 12:51:17.051124 kubelet[3294]: E0115 12:51:17.049815 3294 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5aee6d16-2980-4221-9d0f-dbd0eaab2ab3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:17.051124 kubelet[3294]: E0115 12:51:17.049847 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5aee6d16-2980-4221-9d0f-dbd0eaab2ab3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-psqf5" podUID="5aee6d16-2980-4221-9d0f-dbd0eaab2ab3" Jan 15 12:51:17.051769 containerd[1744]: time="2025-01-15T12:51:17.051675220Z" level=error msg="StopPodSandbox for \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\" failed" error="failed to destroy network for sandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:17.053399 kubelet[3294]: E0115 12:51:17.053167 3294 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:17.053399 
kubelet[3294]: E0115 12:51:17.053257 3294 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c"} Jan 15 12:51:17.053399 kubelet[3294]: E0115 12:51:17.053300 3294 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd1c71d7-7e6f-4176-b57a-00a3f42a566e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:17.053399 kubelet[3294]: E0115 12:51:17.053371 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd1c71d7-7e6f-4176-b57a-00a3f42a566e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64bc7cdd6d-87bn5" podUID="dd1c71d7-7e6f-4176-b57a-00a3f42a566e" Jan 15 12:51:17.056360 containerd[1744]: time="2025-01-15T12:51:17.056262388Z" level=error msg="StopPodSandbox for \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\" failed" error="failed to destroy network for sandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:17.057244 kubelet[3294]: E0115 12:51:17.056903 3294 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:17.057244 kubelet[3294]: E0115 12:51:17.056951 3294 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599"} Jan 15 12:51:17.057244 kubelet[3294]: E0115 12:51:17.057168 3294 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:17.057244 kubelet[3294]: E0115 12:51:17.057203 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64bc7cdd6d-9bxp5" podUID="b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace" Jan 15 12:51:17.066798 containerd[1744]: time="2025-01-15T12:51:17.066254046Z" level=error msg="StopPodSandbox for \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\" failed" error="failed to destroy network for sandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:17.067110 kubelet[3294]: E0115 12:51:17.066572 3294 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:17.067110 kubelet[3294]: E0115 12:51:17.066615 3294 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c"} Jan 15 12:51:17.067110 kubelet[3294]: E0115 12:51:17.066649 3294 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:17.067110 kubelet[3294]: E0115 12:51:17.066683 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qbhfp" podUID="7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03" Jan 15 12:51:17.069252 containerd[1744]: time="2025-01-15T12:51:17.068663411Z" level=error msg="StopPodSandbox for \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\" failed" error="failed to destroy network for sandbox \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:17.069412 kubelet[3294]: E0115 12:51:17.068951 3294 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:17.069412 kubelet[3294]: E0115 12:51:17.069105 3294 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900"} Jan 15 12:51:17.069412 kubelet[3294]: E0115 12:51:17.069150 3294 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e669e4c4-4197-45bf-985d-bac7d7c912eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:17.069412 kubelet[3294]: E0115 12:51:17.069227 3294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e669e4c4-4197-45bf-985d-bac7d7c912eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-697f49564b-jvb87" podUID="e669e4c4-4197-45bf-985d-bac7d7c912eb" Jan 15 12:51:19.841022 kernel: irq 12: nobody cared (try booting with the "irqpoll" option) Jan 15 12:51:19.841156 kernel: CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.6.71-flatcar #1 Jan 15 12:51:19.841179 kernel: Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 15 12:51:19.841196 kernel: Call trace: Jan 15 12:51:19.841211 kernel: dump_backtrace+0x98/0x118 Jan 15 12:51:19.841227 kernel: show_stack+0x18/0x24 Jan 15 12:51:19.841243 kernel: dump_stack_lvl+0x48/0x60 Jan 15 12:51:19.841280 kernel: dump_stack+0x18/0x24 Jan 15 12:51:19.841298 kernel: __report_bad_irq+0x38/0xe0 Jan 15 12:51:19.841313 kernel: note_interrupt+0x308/0x358 Jan 15 12:51:19.841331 kernel: handle_irq_event+0x9c/0xb0 Jan 15 12:51:19.841346 kernel: handle_fasteoi_irq+0xa4/0x230 Jan 15 12:51:19.841359 kernel: generic_handle_domain_irq+0x2c/0x44 Jan 15 12:51:19.841370 kernel: gic_handle_irq+0x50/0x12c Jan 15 12:51:19.841384 kernel: do_interrupt_handler+0x50/0x84 Jan 15 12:51:19.841395 kernel: el1_interrupt+0x34/0x68 Jan 15 12:51:19.841410 kernel: el1h_64_irq_handler+0x18/0x24 Jan 15 12:51:19.841423 kernel: el1h_64_irq+0x64/0x68 Jan 15 12:51:19.841438 kernel: handle_softirqs+0xa8/0x364 Jan 15 12:51:19.841455 kernel: __do_softirq+0x14/0x20 Jan 15 12:51:19.841471 kernel: ____do_softirq+0x10/0x1c Jan 15 12:51:19.841486 kernel: call_on_irq_stack+0x24/0x4c Jan 15 12:51:19.841499 kernel: do_softirq_own_stack+0x1c/0x28 Jan 15 12:51:19.841512 kernel: irq_exit_rcu+0xbc/0xd8 Jan 15 12:51:19.841526 kernel: el1_interrupt+0x38/0x68 Jan 15 12:51:19.841539 kernel: el1h_64_irq_handler+0x18/0x24 Jan 15 12:51:19.841550 kernel: el1h_64_irq+0x64/0x68 Jan 15 12:51:19.841562 kernel: default_idle_call+0x54/0x15c Jan 15 12:51:19.841577 
kernel: do_idle+0x20c/0x264 Jan 15 12:51:19.841594 kernel: cpu_startup_entry+0x38/0x3c Jan 15 12:51:19.841610 kernel: kernel_init+0x0/0x1e0 Jan 15 12:51:19.841623 kernel: arch_post_acpi_subsys_init+0x0/0x8 Jan 15 12:51:19.841636 kernel: start_kernel+0x548/0x6e4 Jan 15 12:51:19.841652 kernel: __primary_switched+0xbc/0xc4 Jan 15 12:51:19.841668 kernel: handlers: Jan 15 12:51:19.841683 kernel: [<0000000062f5680d>] pl011_int Jan 15 12:51:19.841696 kernel: Disabling IRQ #12 Jan 15 12:51:22.326707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679441031.mount: Deactivated successfully. Jan 15 12:51:22.381972 containerd[1744]: time="2025-01-15T12:51:22.381924693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:22.385603 containerd[1744]: time="2025-01-15T12:51:22.385567660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 15 12:51:22.389560 containerd[1744]: time="2025-01-15T12:51:22.389524947Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:22.395253 containerd[1744]: time="2025-01-15T12:51:22.394803317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:22.396066 containerd[1744]: time="2025-01-15T12:51:22.395669918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.494949381s" Jan 15 12:51:22.396181 containerd[1744]: time="2025-01-15T12:51:22.396067359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 15 12:51:22.413612 containerd[1744]: time="2025-01-15T12:51:22.413509712Z" level=info msg="CreateContainer within sandbox \"2edd048e50f75bde1e47bfb1a9645614da220f630ffdf360a543e555ed3bcdcb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 12:51:22.452124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177530935.mount: Deactivated successfully. Jan 15 12:51:22.464016 containerd[1744]: time="2025-01-15T12:51:22.463956686Z" level=info msg="CreateContainer within sandbox \"2edd048e50f75bde1e47bfb1a9645614da220f630ffdf360a543e555ed3bcdcb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c722c03e43c4ce36de98062da3ebf5086370689fe6b217c6f12327bb35144c37\"" Jan 15 12:51:22.464676 containerd[1744]: time="2025-01-15T12:51:22.464639047Z" level=info msg="StartContainer for \"c722c03e43c4ce36de98062da3ebf5086370689fe6b217c6f12327bb35144c37\"" Jan 15 12:51:22.490420 systemd[1]: Started cri-containerd-c722c03e43c4ce36de98062da3ebf5086370689fe6b217c6f12327bb35144c37.scope - libcontainer container c722c03e43c4ce36de98062da3ebf5086370689fe6b217c6f12327bb35144c37. 
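For scale: the calico/node pull above reports a size of 137671624 bytes in 6.494949381s, roughly 21 MB/s from ghcr.io. A throwaway calculation of that rate from the reported fields:

    package main

    import "fmt"

    func main() {
    	const bytes = 137671624.0 // size reported for calico/node:v3.29.1
    	const secs = 6.494949381  // pull duration reported by containerd
    	fmt.Printf("%.1f MB/s\n", bytes/secs/1e6) // ≈ 21.2 MB/s
    }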
Jan 15 12:51:22.532910 containerd[1744]: time="2025-01-15T12:51:22.532297933Z" level=info msg="StartContainer for \"c722c03e43c4ce36de98062da3ebf5086370689fe6b217c6f12327bb35144c37\" returns successfully" Jan 15 12:51:22.831350 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 12:51:22.831509 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 15 12:51:22.997266 kubelet[3294]: I0115 12:51:22.997137 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-kmqvp" podStartSLOduration=2.227383028 podStartE2EDuration="18.99696388s" podCreationTimestamp="2025-01-15 12:51:04 +0000 UTC" firstStartedPulling="2025-01-15 12:51:05.626963068 +0000 UTC m=+24.016533628" lastFinishedPulling="2025-01-15 12:51:22.39654392 +0000 UTC m=+40.786114480" observedRunningTime="2025-01-15 12:51:22.996826799 +0000 UTC m=+41.386397359" watchObservedRunningTime="2025-01-15 12:51:22.99696388 +0000 UTC m=+41.386534440" Jan 15 12:51:24.436022 kernel: bpftool[4563]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 15 12:51:24.698584 systemd-networkd[1609]: vxlan.calico: Link UP Jan 15 12:51:24.698595 systemd-networkd[1609]: vxlan.calico: Gained carrier Jan 15 12:51:25.963240 systemd-networkd[1609]: vxlan.calico: Gained IPv6LL Jan 15 12:51:28.716471 containerd[1744]: time="2025-01-15T12:51:28.716156063Z" level=info msg="StopPodSandbox for \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\"" Jan 15 12:51:28.716471 containerd[1744]: time="2025-01-15T12:51:28.716257983Z" level=info msg="StopPodSandbox for \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\"" Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.791 [INFO][4667] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.792 [INFO][4667] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" iface="eth0" netns="/var/run/netns/cni-3695aabd-256b-0b20-8e14-9580ee616a13" Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.792 [INFO][4667] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" iface="eth0" netns="/var/run/netns/cni-3695aabd-256b-0b20-8e14-9580ee616a13" Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.794 [INFO][4667] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" iface="eth0" netns="/var/run/netns/cni-3695aabd-256b-0b20-8e14-9580ee616a13" Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.794 [INFO][4667] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.794 [INFO][4667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.838 [INFO][4680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" HandleID="k8s-pod-network.1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.839 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.839 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.847 [WARNING][4680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" HandleID="k8s-pod-network.1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.847 [INFO][4680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" HandleID="k8s-pod-network.1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.849 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:28.852476 containerd[1744]: 2025-01-15 12:51:28.851 [INFO][4667] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:28.852926 containerd[1744]: time="2025-01-15T12:51:28.852739918Z" level=info msg="TearDown network for sandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\" successfully" Jan 15 12:51:28.852926 containerd[1744]: time="2025-01-15T12:51:28.852768918Z" level=info msg="StopPodSandbox for \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\" returns successfully" Jan 15 12:51:28.855435 systemd[1]: run-netns-cni\x2d3695aabd\x2d256b\x2d0b20\x2d8e14\x2d9580ee616a13.mount: Deactivated successfully. Jan 15 12:51:28.862086 containerd[1744]: time="2025-01-15T12:51:28.861725095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bc7cdd6d-87bn5,Uid:dd1c71d7-7e6f-4176-b57a-00a3f42a566e,Namespace:calico-apiserver,Attempt:1,}" Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.796 [INFO][4668] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.797 [INFO][4668] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" iface="eth0" netns="/var/run/netns/cni-cffac8d5-82be-12fb-43a4-37d08a5a44da" Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.797 [INFO][4668] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" iface="eth0" netns="/var/run/netns/cni-cffac8d5-82be-12fb-43a4-37d08a5a44da" Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.798 [INFO][4668] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" iface="eth0" netns="/var/run/netns/cni-cffac8d5-82be-12fb-43a4-37d08a5a44da" Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.798 [INFO][4668] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.798 [INFO][4668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.842 [INFO][4679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" HandleID="k8s-pod-network.a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.843 [INFO][4679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.849 [INFO][4679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.864 [WARNING][4679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" HandleID="k8s-pod-network.a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.864 [INFO][4679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" HandleID="k8s-pod-network.a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.865 [INFO][4679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:28.869341 containerd[1744]: 2025-01-15 12:51:28.867 [INFO][4668] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:28.870645 containerd[1744]: time="2025-01-15T12:51:28.869448269Z" level=info msg="TearDown network for sandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\" successfully" Jan 15 12:51:28.870645 containerd[1744]: time="2025-01-15T12:51:28.869474589Z" level=info msg="StopPodSandbox for \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\" returns successfully" Jan 15 12:51:28.872242 containerd[1744]: time="2025-01-15T12:51:28.872203634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbhfp,Uid:7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03,Namespace:calico-system,Attempt:1,}" Jan 15 12:51:28.873105 systemd[1]: run-netns-cni\x2dcffac8d5\x2d82be\x2d12fb\x2d43a4\x2d37d08a5a44da.mount: Deactivated successfully. Jan 15 12:51:30.060556 systemd-networkd[1609]: cali1a24b6a8301: Link UP Jan 15 12:51:30.060808 systemd-networkd[1609]: cali1a24b6a8301: Gained carrier Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:29.963 [INFO][4704] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0 csi-node-driver- calico-system 7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03 748 0 2025-01-15 12:51:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-b8bd16053a csi-node-driver-qbhfp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1a24b6a8301 [] []}} ContainerID="26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" Namespace="calico-system" Pod="csi-node-driver-qbhfp" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-" Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:29.963 [INFO][4704] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" Namespace="calico-system" Pod="csi-node-driver-qbhfp" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.004 [INFO][4713] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" HandleID="k8s-pod-network.26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" Workload="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.021 [INFO][4713] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" HandleID="k8s-pod-network.26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" Workload="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed900), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-b8bd16053a", "pod":"csi-node-driver-qbhfp", "timestamp":"2025-01-15 12:51:30.004951508 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b8bd16053a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.021 [INFO][4713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.022 [INFO][4713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.022 [INFO][4713] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b8bd16053a' Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.024 [INFO][4713] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.029 [INFO][4713] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.033 [INFO][4713] ipam/ipam.go 489: Trying affinity for 192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.035 [INFO][4713] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.037 [INFO][4713] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.037 [INFO][4713] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.039 [INFO][4713] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.043 [INFO][4713] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.052 [INFO][4713] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.3.193/26] block=192.168.3.192/26 handle="k8s-pod-network.26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.052 [INFO][4713] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.193/26] handle="k8s-pod-network.26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.052 [INFO][4713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 12:51:30.089601 containerd[1744]: 2025-01-15 12:51:30.052 [INFO][4713] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.193/26] IPv6=[] ContainerID="26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" HandleID="k8s-pod-network.26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" Workload="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:30.093213 containerd[1744]: 2025-01-15 12:51:30.055 [INFO][4704] cni-plugin/k8s.go 386: Populated endpoint ContainerID="26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" Namespace="calico-system" Pod="csi-node-driver-qbhfp" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"", Pod:"csi-node-driver-qbhfp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a24b6a8301", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:30.093213 containerd[1744]: 2025-01-15 12:51:30.055 [INFO][4704] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.193/32] ContainerID="26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" Namespace="calico-system" Pod="csi-node-driver-qbhfp" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:30.093213 containerd[1744]: 2025-01-15 12:51:30.055 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a24b6a8301 ContainerID="26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" Namespace="calico-system" Pod="csi-node-driver-qbhfp" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:30.093213 containerd[1744]: 2025-01-15 12:51:30.062 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" Namespace="calico-system" Pod="csi-node-driver-qbhfp" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:30.093213 containerd[1744]: 2025-01-15 12:51:30.062 [INFO][4704] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" Namespace="calico-system" Pod="csi-node-driver-qbhfp" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b", Pod:"csi-node-driver-qbhfp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a24b6a8301", MAC:"66:cc:b0:bf:1d:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:30.093213 containerd[1744]: 2025-01-15 12:51:30.085 [INFO][4704] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b" Namespace="calico-system" Pod="csi-node-driver-qbhfp" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:30.135841 containerd[1744]: time="2025-01-15T12:51:30.134976639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:30.135841 containerd[1744]: time="2025-01-15T12:51:30.135074439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:30.135841 containerd[1744]: time="2025-01-15T12:51:30.135089359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:30.135841 containerd[1744]: time="2025-01-15T12:51:30.135177799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:30.167513 systemd-networkd[1609]: calic32203b1025: Link UP Jan 15 12:51:30.167651 systemd-networkd[1609]: calic32203b1025: Gained carrier Jan 15 12:51:30.187194 systemd[1]: Started cri-containerd-26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b.scope - libcontainer container 26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b. 
Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:29.969 [INFO][4691] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0 calico-apiserver-64bc7cdd6d- calico-apiserver dd1c71d7-7e6f-4176-b57a-00a3f42a566e 747 0 2025-01-15 12:51:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64bc7cdd6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-b8bd16053a calico-apiserver-64bc7cdd6d-87bn5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic32203b1025 [] []}} ContainerID="31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-87bn5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-" Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:29.969 [INFO][4691] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-87bn5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.007 [INFO][4717] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" HandleID="k8s-pod-network.31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.022 [INFO][4717] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" HandleID="k8s-pod-network.31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-b8bd16053a", "pod":"calico-apiserver-64bc7cdd6d-87bn5", "timestamp":"2025-01-15 12:51:30.007112792 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b8bd16053a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.022 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.052 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.052 [INFO][4717] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b8bd16053a' Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.055 [INFO][4717] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.072 [INFO][4717] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.104 [INFO][4717] ipam/ipam.go 489: Trying affinity for 192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.108 [INFO][4717] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.113 [INFO][4717] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.116 [INFO][4717] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.127 [INFO][4717] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.145 [INFO][4717] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.159 [INFO][4717] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.3.194/26] block=192.168.3.192/26 handle="k8s-pod-network.31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.159 [INFO][4717] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.194/26] handle="k8s-pod-network.31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.160 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 12:51:30.206036 containerd[1744]: 2025-01-15 12:51:30.160 [INFO][4717] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.194/26] IPv6=[] ContainerID="31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" HandleID="k8s-pod-network.31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:30.206570 containerd[1744]: 2025-01-15 12:51:30.165 [INFO][4691] cni-plugin/k8s.go 386: Populated endpoint ContainerID="31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-87bn5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0", GenerateName:"calico-apiserver-64bc7cdd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd1c71d7-7e6f-4176-b57a-00a3f42a566e", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bc7cdd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"", Pod:"calico-apiserver-64bc7cdd6d-87bn5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic32203b1025", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:30.206570 containerd[1744]: 2025-01-15 12:51:30.165 [INFO][4691] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.194/32] ContainerID="31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-87bn5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:30.206570 containerd[1744]: 2025-01-15 12:51:30.165 [INFO][4691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic32203b1025 ContainerID="31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-87bn5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:30.206570 containerd[1744]: 2025-01-15 12:51:30.167 [INFO][4691] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-87bn5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:30.206570 containerd[1744]: 2025-01-15 12:51:30.169 [INFO][4691] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-87bn5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0", GenerateName:"calico-apiserver-64bc7cdd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd1c71d7-7e6f-4176-b57a-00a3f42a566e", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bc7cdd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd", Pod:"calico-apiserver-64bc7cdd6d-87bn5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic32203b1025", MAC:"46:47:13:de:a7:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:30.206570 containerd[1744]: 2025-01-15 12:51:30.201 [INFO][4691] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-87bn5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:30.255071 containerd[1744]: time="2025-01-15T12:51:30.252338309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:30.255071 containerd[1744]: time="2025-01-15T12:51:30.252404029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:30.255071 containerd[1744]: time="2025-01-15T12:51:30.252419109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:30.255071 containerd[1744]: time="2025-01-15T12:51:30.252501590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:30.256454 containerd[1744]: time="2025-01-15T12:51:30.256253876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbhfp,Uid:7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03,Namespace:calico-system,Attempt:1,} returns sandbox id \"26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b\"" Jan 15 12:51:30.262372 containerd[1744]: time="2025-01-15T12:51:30.262330606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 15 12:51:30.276200 systemd[1]: Started cri-containerd-31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd.scope - libcontainer container 31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd. Jan 15 12:51:30.316799 containerd[1744]: time="2025-01-15T12:51:30.316403973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bc7cdd6d-87bn5,Uid:dd1c71d7-7e6f-4176-b57a-00a3f42a566e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd\"" Jan 15 12:51:30.714212 containerd[1744]: time="2025-01-15T12:51:30.712818576Z" level=info msg="StopPodSandbox for \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\"" Jan 15 12:51:30.714212 containerd[1744]: time="2025-01-15T12:51:30.713894017Z" level=info msg="StopPodSandbox for \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\"" Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.775 [INFO][4873] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.775 [INFO][4873] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" iface="eth0" netns="/var/run/netns/cni-89582ade-8a8d-24eb-5c75-fa4f9e5d67f1" Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.775 [INFO][4873] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" iface="eth0" netns="/var/run/netns/cni-89582ade-8a8d-24eb-5c75-fa4f9e5d67f1" Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.775 [INFO][4873] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" iface="eth0" netns="/var/run/netns/cni-89582ade-8a8d-24eb-5c75-fa4f9e5d67f1" Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.775 [INFO][4873] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.775 [INFO][4873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.803 [INFO][4881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" HandleID="k8s-pod-network.8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.803 [INFO][4881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.803 [INFO][4881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.815 [WARNING][4881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" HandleID="k8s-pod-network.8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.815 [INFO][4881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" HandleID="k8s-pod-network.8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.817 [INFO][4881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:30.822505 containerd[1744]: 2025-01-15 12:51:30.819 [INFO][4873] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:30.822505 containerd[1744]: time="2025-01-15T12:51:30.822320913Z" level=info msg="TearDown network for sandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\" successfully" Jan 15 12:51:30.822505 containerd[1744]: time="2025-01-15T12:51:30.822349393Z" level=info msg="StopPodSandbox for \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\" returns successfully" Jan 15 12:51:30.823900 containerd[1744]: time="2025-01-15T12:51:30.823245995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bc7cdd6d-9bxp5,Uid:b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace,Namespace:calico-apiserver,Attempt:1,}" Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.780 [INFO][4865] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.781 [INFO][4865] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" iface="eth0" netns="/var/run/netns/cni-97fa3abb-b8cc-e001-6476-269e2895f264" Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.781 [INFO][4865] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" iface="eth0" netns="/var/run/netns/cni-97fa3abb-b8cc-e001-6476-269e2895f264" Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.781 [INFO][4865] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" iface="eth0" netns="/var/run/netns/cni-97fa3abb-b8cc-e001-6476-269e2895f264" Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.781 [INFO][4865] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.781 [INFO][4865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.811 [INFO][4885] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" HandleID="k8s-pod-network.7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.811 [INFO][4885] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.817 [INFO][4885] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.828 [WARNING][4885] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" HandleID="k8s-pod-network.7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.828 [INFO][4885] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" HandleID="k8s-pod-network.7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.830 [INFO][4885] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:30.833306 containerd[1744]: 2025-01-15 12:51:30.831 [INFO][4865] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:30.834451 containerd[1744]: time="2025-01-15T12:51:30.834108092Z" level=info msg="TearDown network for sandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\" successfully" Jan 15 12:51:30.834451 containerd[1744]: time="2025-01-15T12:51:30.834142092Z" level=info msg="StopPodSandbox for \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\" returns successfully" Jan 15 12:51:30.835589 containerd[1744]: time="2025-01-15T12:51:30.835420734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-psqf5,Uid:5aee6d16-2980-4221-9d0f-dbd0eaab2ab3,Namespace:kube-system,Attempt:1,}" Jan 15 12:51:30.908606 systemd[1]: run-netns-cni\x2d89582ade\x2d8a8d\x2d24eb\x2d5c75\x2dfa4f9e5d67f1.mount: Deactivated successfully. Jan 15 12:51:30.908707 systemd[1]: run-netns-cni\x2d97fa3abb\x2db8cc\x2de001\x2d6476\x2d269e2895f264.mount: Deactivated successfully. 
Jan 15 12:51:31.059704 systemd-networkd[1609]: calia9a6c4915bf: Link UP Jan 15 12:51:31.062825 systemd-networkd[1609]: calia9a6c4915bf: Gained carrier Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:30.952 [INFO][4895] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0 calico-apiserver-64bc7cdd6d- calico-apiserver b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace 764 0 2025-01-15 12:51:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64bc7cdd6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-b8bd16053a calico-apiserver-64bc7cdd6d-9bxp5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia9a6c4915bf [] []}} ContainerID="abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-9bxp5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-" Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:30.953 [INFO][4895] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-9bxp5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:30.997 [INFO][4917] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" HandleID="k8s-pod-network.abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.010 [INFO][4917] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" HandleID="k8s-pod-network.abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-b8bd16053a", "pod":"calico-apiserver-64bc7cdd6d-9bxp5", "timestamp":"2025-01-15 12:51:30.997277957 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b8bd16053a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.010 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.010 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.010 [INFO][4917] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b8bd16053a' Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.012 [INFO][4917] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.016 [INFO][4917] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.022 [INFO][4917] ipam/ipam.go 489: Trying affinity for 192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.025 [INFO][4917] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.027 [INFO][4917] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.027 [INFO][4917] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.029 [INFO][4917] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.037 [INFO][4917] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.051 [INFO][4917] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.3.195/26] block=192.168.3.192/26 handle="k8s-pod-network.abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.051 [INFO][4917] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.195/26] handle="k8s-pod-network.abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.051 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 12:51:31.083307 containerd[1744]: 2025-01-15 12:51:31.051 [INFO][4917] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.195/26] IPv6=[] ContainerID="abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" HandleID="k8s-pod-network.abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:31.083848 containerd[1744]: 2025-01-15 12:51:31.054 [INFO][4895] cni-plugin/k8s.go 386: Populated endpoint ContainerID="abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-9bxp5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0", GenerateName:"calico-apiserver-64bc7cdd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bc7cdd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"", Pod:"calico-apiserver-64bc7cdd6d-9bxp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9a6c4915bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:31.083848 containerd[1744]: 2025-01-15 12:51:31.054 [INFO][4895] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.195/32] ContainerID="abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-9bxp5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:31.083848 containerd[1744]: 2025-01-15 12:51:31.054 [INFO][4895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9a6c4915bf ContainerID="abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-9bxp5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:31.083848 containerd[1744]: 2025-01-15 12:51:31.063 [INFO][4895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-9bxp5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:31.083848 containerd[1744]: 2025-01-15 12:51:31.064 [INFO][4895] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-9bxp5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0", GenerateName:"calico-apiserver-64bc7cdd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bc7cdd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a", Pod:"calico-apiserver-64bc7cdd6d-9bxp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9a6c4915bf", MAC:"ae:19:7a:3a:d6:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:31.083848 containerd[1744]: 2025-01-15 12:51:31.079 [INFO][4895] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a" Namespace="calico-apiserver" Pod="calico-apiserver-64bc7cdd6d-9bxp5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:31.119737 systemd-networkd[1609]: cali437d500f9ab: Link UP Jan 15 12:51:31.120260 systemd-networkd[1609]: cali437d500f9ab: Gained carrier Jan 15 12:51:31.131106 containerd[1744]: time="2025-01-15T12:51:31.130077292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:31.131106 containerd[1744]: time="2025-01-15T12:51:31.130238652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:31.131106 containerd[1744]: time="2025-01-15T12:51:31.130252172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:31.133323 containerd[1744]: time="2025-01-15T12:51:31.130773893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:30.987 [INFO][4906] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0 coredns-76f75df574- kube-system 5aee6d16-2980-4221-9d0f-dbd0eaab2ab3 765 0 2025-01-15 12:50:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-b8bd16053a coredns-76f75df574-psqf5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali437d500f9ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" Namespace="kube-system" Pod="coredns-76f75df574-psqf5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-" Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:30.987 [INFO][4906] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" Namespace="kube-system" Pod="coredns-76f75df574-psqf5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.027 [INFO][4926] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" HandleID="k8s-pod-network.fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.042 [INFO][4926] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" HandleID="k8s-pod-network.fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000303520), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-b8bd16053a", "pod":"coredns-76f75df574-psqf5", "timestamp":"2025-01-15 12:51:31.027462726 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b8bd16053a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.043 [INFO][4926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.051 [INFO][4926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.051 [INFO][4926] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b8bd16053a' Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.055 [INFO][4926] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.063 [INFO][4926] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.077 [INFO][4926] ipam/ipam.go 489: Trying affinity for 192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.083 [INFO][4926] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.089 [INFO][4926] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.089 [INFO][4926] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.091 [INFO][4926] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018 Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.103 [INFO][4926] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.114 [INFO][4926] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.3.196/26] block=192.168.3.192/26 handle="k8s-pod-network.fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.114 [INFO][4926] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.196/26] handle="k8s-pod-network.fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.114 [INFO][4926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
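The sequence just logged is Calico IPAM's block-affinity path: acquire the host-wide lock, look up the node's affine blocks, try 192.168.3.192/26, load the block, claim one address, write the block back, release the lock. A /26 holds 2^(32-26) = 64 addresses (192.168.3.192 through 192.168.3.255), Calico's default IPAM block size, and every pod address in this section (.195 through .198) is carved from that one node-affine block. A minimal sketch for inspecting the resulting state, assuming calicoctl is installed and pointed at the same datastore:

    calicoctl ipam show --ip=192.168.3.196    # which handle owns this address
    calicoctl ipam show --show-blocks         # block affinities per node
    calicoctl get workloadendpoints -n kube-system -o wide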
Jan 15 12:51:31.151976 containerd[1744]: 2025-01-15 12:51:31.114 [INFO][4926] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.196/26] IPv6=[] ContainerID="fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" HandleID="k8s-pod-network.fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:31.152687 containerd[1744]: 2025-01-15 12:51:31.116 [INFO][4906] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" Namespace="kube-system" Pod="coredns-76f75df574-psqf5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5aee6d16-2980-4221-9d0f-dbd0eaab2ab3", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"", Pod:"coredns-76f75df574-psqf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali437d500f9ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:31.152687 containerd[1744]: 2025-01-15 12:51:31.117 [INFO][4906] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.196/32] ContainerID="fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" Namespace="kube-system" Pod="coredns-76f75df574-psqf5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:31.152687 containerd[1744]: 2025-01-15 12:51:31.117 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali437d500f9ab ContainerID="fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" Namespace="kube-system" Pod="coredns-76f75df574-psqf5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:31.152687 containerd[1744]: 2025-01-15 12:51:31.120 [INFO][4906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" Namespace="kube-system" Pod="coredns-76f75df574-psqf5" 
WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:31.152687 containerd[1744]: 2025-01-15 12:51:31.122 [INFO][4906] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" Namespace="kube-system" Pod="coredns-76f75df574-psqf5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5aee6d16-2980-4221-9d0f-dbd0eaab2ab3", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018", Pod:"coredns-76f75df574-psqf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali437d500f9ab", MAC:"36:74:d6:62:76:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:31.152687 containerd[1744]: 2025-01-15 12:51:31.145 [INFO][4906] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018" Namespace="kube-system" Pod="coredns-76f75df574-psqf5" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:31.177240 systemd[1]: Started cri-containerd-abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a.scope - libcontainer container abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a. Jan 15 12:51:31.191347 containerd[1744]: time="2025-01-15T12:51:31.191052031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:31.191347 containerd[1744]: time="2025-01-15T12:51:31.191231911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:31.191347 containerd[1744]: time="2025-01-15T12:51:31.191281831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:31.192978 containerd[1744]: time="2025-01-15T12:51:31.191466912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:31.211286 systemd[1]: Started cri-containerd-fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018.scope - libcontainer container fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018. Jan 15 12:51:31.229775 containerd[1744]: time="2025-01-15T12:51:31.229738934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bc7cdd6d-9bxp5,Uid:b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a\"" Jan 15 12:51:31.251716 containerd[1744]: time="2025-01-15T12:51:31.251645489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-psqf5,Uid:5aee6d16-2980-4221-9d0f-dbd0eaab2ab3,Namespace:kube-system,Attempt:1,} returns sandbox id \"fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018\"" Jan 15 12:51:31.262427 containerd[1744]: time="2025-01-15T12:51:31.262198386Z" level=info msg="CreateContainer within sandbox \"fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 12:51:31.334937 containerd[1744]: time="2025-01-15T12:51:31.334807144Z" level=info msg="CreateContainer within sandbox \"fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ecd3234d06fe780ec25af5e726b8663cd699e69462932a9bb7c52a7f5d370928\"" Jan 15 12:51:31.337482 containerd[1744]: time="2025-01-15T12:51:31.337347268Z" level=info msg="StartContainer for \"ecd3234d06fe780ec25af5e726b8663cd699e69462932a9bb7c52a7f5d370928\"" Jan 15 12:51:31.365219 systemd[1]: Started cri-containerd-ecd3234d06fe780ec25af5e726b8663cd699e69462932a9bb7c52a7f5d370928.scope - libcontainer container ecd3234d06fe780ec25af5e726b8663cd699e69462932a9bb7c52a7f5d370928. Jan 15 12:51:31.392847 containerd[1744]: time="2025-01-15T12:51:31.392798878Z" level=info msg="StartContainer for \"ecd3234d06fe780ec25af5e726b8663cd699e69462932a9bb7c52a7f5d370928\" returns successfully" Jan 15 12:51:31.714442 containerd[1744]: time="2025-01-15T12:51:31.714082559Z" level=info msg="StopPodSandbox for \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\"" Jan 15 12:51:31.731091 containerd[1744]: time="2025-01-15T12:51:31.731052226Z" level=info msg="StopPodSandbox for \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\"" Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.822 [INFO][5110] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.823 [INFO][5110] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" iface="eth0" netns="/var/run/netns/cni-4b14a400-a12e-c04f-c86e-5a63fb8a8cd6" Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.825 [INFO][5110] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" iface="eth0" netns="/var/run/netns/cni-4b14a400-a12e-c04f-c86e-5a63fb8a8cd6" Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.826 [INFO][5110] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" iface="eth0" netns="/var/run/netns/cni-4b14a400-a12e-c04f-c86e-5a63fb8a8cd6" Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.826 [INFO][5110] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.826 [INFO][5110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.871 [INFO][5121] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" HandleID="k8s-pod-network.02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.871 [INFO][5121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.871 [INFO][5121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.884 [WARNING][5121] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" HandleID="k8s-pod-network.02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.884 [INFO][5121] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" HandleID="k8s-pod-network.02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.890 [INFO][5121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:31.898720 containerd[1744]: 2025-01-15 12:51:31.893 [INFO][5110] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:31.899164 containerd[1744]: time="2025-01-15T12:51:31.898904738Z" level=info msg="TearDown network for sandbox \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\" successfully" Jan 15 12:51:31.899164 containerd[1744]: time="2025-01-15T12:51:31.898932418Z" level=info msg="StopPodSandbox for \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\" returns successfully" Jan 15 12:51:31.900383 containerd[1744]: time="2025-01-15T12:51:31.900027220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-697f49564b-jvb87,Uid:e669e4c4-4197-45bf-985d-bac7d7c912eb,Namespace:calico-system,Attempt:1,}" Jan 15 12:51:31.908278 systemd[1]: run-netns-cni\x2d4b14a400\x2da12e\x2dc04f\x2dc86e\x2d5a63fb8a8cd6.mount: Deactivated successfully. 
Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.830 [INFO][5109] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.830 [INFO][5109] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" iface="eth0" netns="/var/run/netns/cni-ca40a972-09a3-ec7f-3972-6334d924350e" Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.830 [INFO][5109] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" iface="eth0" netns="/var/run/netns/cni-ca40a972-09a3-ec7f-3972-6334d924350e" Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.831 [INFO][5109] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" iface="eth0" netns="/var/run/netns/cni-ca40a972-09a3-ec7f-3972-6334d924350e" Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.831 [INFO][5109] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.831 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.892 [INFO][5123] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" HandleID="k8s-pod-network.b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.893 [INFO][5123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.893 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.913 [WARNING][5123] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" HandleID="k8s-pod-network.b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.913 [INFO][5123] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" HandleID="k8s-pod-network.b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.915 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:31.919320 containerd[1744]: 2025-01-15 12:51:31.917 [INFO][5109] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:31.921170 containerd[1744]: time="2025-01-15T12:51:31.921137094Z" level=info msg="TearDown network for sandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\" successfully" Jan 15 12:51:31.922021 containerd[1744]: time="2025-01-15T12:51:31.921296375Z" level=info msg="StopPodSandbox for \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\" returns successfully" Jan 15 12:51:31.923476 systemd[1]: run-netns-cni\x2dca40a972\x2d09a3\x2dec7f\x2d3972\x2d6334d924350e.mount: Deactivated successfully. Jan 15 12:51:31.924558 containerd[1744]: time="2025-01-15T12:51:31.924513580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hpmct,Uid:22ef187a-1055-4e64-99a2-f81f790e5b7a,Namespace:kube-system,Attempt:1,}" Jan 15 12:51:31.978202 containerd[1744]: time="2025-01-15T12:51:31.978083947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:31.985657 containerd[1744]: time="2025-01-15T12:51:31.985597879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 15 12:51:32.011976 containerd[1744]: time="2025-01-15T12:51:32.010630919Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:32.016621 kubelet[3294]: I0115 12:51:32.016586 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-psqf5" podStartSLOduration=36.016541769 podStartE2EDuration="36.016541769s" podCreationTimestamp="2025-01-15 12:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:51:32.014154405 +0000 UTC m=+50.403724965" watchObservedRunningTime="2025-01-15 12:51:32.016541769 +0000 UTC m=+50.406112329" Jan 15 12:51:32.026820 containerd[1744]: time="2025-01-15T12:51:32.026749666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:32.029312 containerd[1744]: time="2025-01-15T12:51:32.029062549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.766685663s" Jan 15 12:51:32.029424 containerd[1744]: time="2025-01-15T12:51:32.029319870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 15 12:51:32.031552 containerd[1744]: time="2025-01-15T12:51:32.030202911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 12:51:32.037537 containerd[1744]: time="2025-01-15T12:51:32.037417403Z" level=info msg="CreateContainer within sandbox \"26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 15 12:51:32.044374 systemd-networkd[1609]: cali1a24b6a8301: Gained IPv6LL Jan 15 
12:51:32.044653 systemd-networkd[1609]: calic32203b1025: Gained IPv6LL Jan 15 12:51:32.116721 containerd[1744]: time="2025-01-15T12:51:32.116667691Z" level=info msg="CreateContainer within sandbox \"26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d5d364a186d5297789e9343105c26997655660714b0ae1f4e45270e77c3e2393\"" Jan 15 12:51:32.117434 containerd[1744]: time="2025-01-15T12:51:32.117409293Z" level=info msg="StartContainer for \"d5d364a186d5297789e9343105c26997655660714b0ae1f4e45270e77c3e2393\"" Jan 15 12:51:32.174474 systemd[1]: Started cri-containerd-d5d364a186d5297789e9343105c26997655660714b0ae1f4e45270e77c3e2393.scope - libcontainer container d5d364a186d5297789e9343105c26997655660714b0ae1f4e45270e77c3e2393. Jan 15 12:51:32.247108 systemd-networkd[1609]: cali68fcbed395e: Link UP Jan 15 12:51:32.250125 systemd-networkd[1609]: cali68fcbed395e: Gained carrier Jan 15 12:51:32.251304 containerd[1744]: time="2025-01-15T12:51:32.251075349Z" level=info msg="StartContainer for \"d5d364a186d5297789e9343105c26997655660714b0ae1f4e45270e77c3e2393\" returns successfully" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.093 [INFO][5137] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0 calico-kube-controllers-697f49564b- calico-system e669e4c4-4197-45bf-985d-bac7d7c912eb 779 0 2025-01-15 12:51:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:697f49564b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-b8bd16053a calico-kube-controllers-697f49564b-jvb87 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali68fcbed395e [] []}} ContainerID="19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" Namespace="calico-system" Pod="calico-kube-controllers-697f49564b-jvb87" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.093 [INFO][5137] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" Namespace="calico-system" Pod="calico-kube-controllers-697f49564b-jvb87" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.159 [INFO][5164] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" HandleID="k8s-pod-network.19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.185 [INFO][5164] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" HandleID="k8s-pod-network.19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000223a60), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-b8bd16053a", "pod":"calico-kube-controllers-697f49564b-jvb87", "timestamp":"2025-01-15 12:51:32.15930908 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b8bd16053a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.186 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.186 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.186 [INFO][5164] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b8bd16053a' Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.189 [INFO][5164] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.198 [INFO][5164] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.204 [INFO][5164] ipam/ipam.go 489: Trying affinity for 192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.208 [INFO][5164] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.212 [INFO][5164] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.212 [INFO][5164] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.214 [INFO][5164] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67 Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.221 [INFO][5164] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.236 [INFO][5164] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.3.197/26] block=192.168.3.192/26 handle="k8s-pod-network.19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.236 [INFO][5164] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.197/26] handle="k8s-pod-network.19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.236 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
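This third pass claims 192.168.3.197 for calico-kube-controllers-697f49564b-jvb87 from the same /26; once the affinity exists, "Trying affinity for 192.168.3.192/26" succeeds immediately and no new block is claimed. Mapping the assigned addresses back to pods from the Kubernetes side is a one-liner; a minimal sketch, assuming kubectl access, with the node name taken from the log:

    kubectl get pods -A -o wide --field-selector spec.nodeName=ci-4081.3.0-a-b8bd16053a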
Jan 15 12:51:32.282714 containerd[1744]: 2025-01-15 12:51:32.236 [INFO][5164] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.197/26] IPv6=[] ContainerID="19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" HandleID="k8s-pod-network.19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:32.283743 containerd[1744]: 2025-01-15 12:51:32.241 [INFO][5137] cni-plugin/k8s.go 386: Populated endpoint ContainerID="19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" Namespace="calico-system" Pod="calico-kube-controllers-697f49564b-jvb87" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0", GenerateName:"calico-kube-controllers-697f49564b-", Namespace:"calico-system", SelfLink:"", UID:"e669e4c4-4197-45bf-985d-bac7d7c912eb", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"697f49564b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"", Pod:"calico-kube-controllers-697f49564b-jvb87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68fcbed395e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:32.283743 containerd[1744]: 2025-01-15 12:51:32.242 [INFO][5137] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.197/32] ContainerID="19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" Namespace="calico-system" Pod="calico-kube-controllers-697f49564b-jvb87" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:32.283743 containerd[1744]: 2025-01-15 12:51:32.242 [INFO][5137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68fcbed395e ContainerID="19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" Namespace="calico-system" Pod="calico-kube-controllers-697f49564b-jvb87" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:32.283743 containerd[1744]: 2025-01-15 12:51:32.250 [INFO][5137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" Namespace="calico-system" Pod="calico-kube-controllers-697f49564b-jvb87" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:32.283743 
containerd[1744]: 2025-01-15 12:51:32.251 [INFO][5137] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" Namespace="calico-system" Pod="calico-kube-controllers-697f49564b-jvb87" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0", GenerateName:"calico-kube-controllers-697f49564b-", Namespace:"calico-system", SelfLink:"", UID:"e669e4c4-4197-45bf-985d-bac7d7c912eb", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"697f49564b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67", Pod:"calico-kube-controllers-697f49564b-jvb87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68fcbed395e", MAC:"26:c3:98:6f:74:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:32.283743 containerd[1744]: 2025-01-15 12:51:32.279 [INFO][5137] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67" Namespace="calico-system" Pod="calico-kube-controllers-697f49564b-jvb87" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:32.309896 systemd-networkd[1609]: cali90eae49d204: Link UP Jan 15 12:51:32.311695 systemd-networkd[1609]: cali90eae49d204: Gained carrier Jan 15 12:51:32.326106 containerd[1744]: time="2025-01-15T12:51:32.325581950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:32.326106 containerd[1744]: time="2025-01-15T12:51:32.326031991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:32.326106 containerd[1744]: time="2025-01-15T12:51:32.326048511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:32.326622 containerd[1744]: time="2025-01-15T12:51:32.326480351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.133 [INFO][5149] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0 coredns-76f75df574- kube-system 22ef187a-1055-4e64-99a2-f81f790e5b7a 780 0 2025-01-15 12:50:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-b8bd16053a coredns-76f75df574-hpmct eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali90eae49d204 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" Namespace="kube-system" Pod="coredns-76f75df574-hpmct" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-" Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.133 [INFO][5149] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" Namespace="kube-system" Pod="coredns-76f75df574-hpmct" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.208 [INFO][5182] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" HandleID="k8s-pod-network.28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.223 [INFO][5182] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" HandleID="k8s-pod-network.28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030cb50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-b8bd16053a", "pod":"coredns-76f75df574-hpmct", "timestamp":"2025-01-15 12:51:32.207773599 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b8bd16053a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.224 [INFO][5182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.237 [INFO][5182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.237 [INFO][5182] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b8bd16053a' Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.240 [INFO][5182] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.251 [INFO][5182] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.259 [INFO][5182] ipam/ipam.go 489: Trying affinity for 192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.268 [INFO][5182] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.277 [INFO][5182] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.277 [INFO][5182] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.280 [INFO][5182] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660 Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.289 [INFO][5182] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.300 [INFO][5182] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.3.198/26] block=192.168.3.192/26 handle="k8s-pod-network.28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.300 [INFO][5182] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.198/26] handle="k8s-pod-network.28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" host="ci-4081.3.0-a-b8bd16053a" Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.300 [INFO][5182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 12:51:32.341967 containerd[1744]: 2025-01-15 12:51:32.300 [INFO][5182] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.198/26] IPv6=[] ContainerID="28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" HandleID="k8s-pod-network.28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:32.342742 containerd[1744]: 2025-01-15 12:51:32.303 [INFO][5149] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" Namespace="kube-system" Pod="coredns-76f75df574-hpmct" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"22ef187a-1055-4e64-99a2-f81f790e5b7a", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"", Pod:"coredns-76f75df574-hpmct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90eae49d204", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:32.342742 containerd[1744]: 2025-01-15 12:51:32.304 [INFO][5149] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.198/32] ContainerID="28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" Namespace="kube-system" Pod="coredns-76f75df574-hpmct" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:32.342742 containerd[1744]: 2025-01-15 12:51:32.304 [INFO][5149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90eae49d204 ContainerID="28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" Namespace="kube-system" Pod="coredns-76f75df574-hpmct" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:32.342742 containerd[1744]: 2025-01-15 12:51:32.312 [INFO][5149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" Namespace="kube-system" Pod="coredns-76f75df574-hpmct" 
WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:32.342742 containerd[1744]: 2025-01-15 12:51:32.315 [INFO][5149] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" Namespace="kube-system" Pod="coredns-76f75df574-hpmct" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"22ef187a-1055-4e64-99a2-f81f790e5b7a", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660", Pod:"coredns-76f75df574-hpmct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90eae49d204", MAC:"2e:8a:57:2d:50:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:32.342742 containerd[1744]: 2025-01-15 12:51:32.337 [INFO][5149] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660" Namespace="kube-system" Pod="coredns-76f75df574-hpmct" WorkloadEndpoint="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:32.351270 systemd[1]: Started cri-containerd-19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67.scope - libcontainer container 19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67. Jan 15 12:51:32.383460 containerd[1744]: time="2025-01-15T12:51:32.383373124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:32.383677 containerd[1744]: time="2025-01-15T12:51:32.383652364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:32.383774 containerd[1744]: time="2025-01-15T12:51:32.383750124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:32.384548 containerd[1744]: time="2025-01-15T12:51:32.384464885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:32.407594 systemd[1]: Started cri-containerd-28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660.scope - libcontainer container 28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660. Jan 15 12:51:32.411455 containerd[1744]: time="2025-01-15T12:51:32.411276969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-697f49564b-jvb87,Uid:e669e4c4-4197-45bf-985d-bac7d7c912eb,Namespace:calico-system,Attempt:1,} returns sandbox id \"19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67\"" Jan 15 12:51:32.445248 containerd[1744]: time="2025-01-15T12:51:32.445144504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hpmct,Uid:22ef187a-1055-4e64-99a2-f81f790e5b7a,Namespace:kube-system,Attempt:1,} returns sandbox id \"28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660\"" Jan 15 12:51:32.450108 containerd[1744]: time="2025-01-15T12:51:32.449878591Z" level=info msg="CreateContainer within sandbox \"28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 12:51:32.490454 containerd[1744]: time="2025-01-15T12:51:32.490399737Z" level=info msg="CreateContainer within sandbox \"28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71992f68ae2ca038ea527c48e1aaf3be2e37186a3c0b19c813289764c8637d91\"" Jan 15 12:51:32.492253 containerd[1744]: time="2025-01-15T12:51:32.491319499Z" level=info msg="StartContainer for \"71992f68ae2ca038ea527c48e1aaf3be2e37186a3c0b19c813289764c8637d91\"" Jan 15 12:51:32.516404 systemd[1]: Started cri-containerd-71992f68ae2ca038ea527c48e1aaf3be2e37186a3c0b19c813289764c8637d91.scope - libcontainer container 71992f68ae2ca038ea527c48e1aaf3be2e37186a3c0b19c813289764c8637d91. 
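In the endpoint dumps above, the Go struct printer renders ports in hexadecimal: Port:0x35 is port 53 (the dns and dns-tcp named ports) and Port:0x23c1 is TCP 9153, CoreDNS's Prometheus metrics endpoint. A quick conversion check in the shell:

    printf '%d %d\n' 0x35 0x23c1    # -> 53 9153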
Jan 15 12:51:32.546811 containerd[1744]: time="2025-01-15T12:51:32.546703028Z" level=info msg="StartContainer for \"71992f68ae2ca038ea527c48e1aaf3be2e37186a3c0b19c813289764c8637d91\" returns successfully" Jan 15 12:51:32.619105 systemd-networkd[1609]: calia9a6c4915bf: Gained IPv6LL Jan 15 12:51:33.019147 kubelet[3294]: I0115 12:51:33.019100 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hpmct" podStartSLOduration=37.019059834 podStartE2EDuration="37.019059834s" podCreationTimestamp="2025-01-15 12:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:51:33.018801634 +0000 UTC m=+51.408372194" watchObservedRunningTime="2025-01-15 12:51:33.019059834 +0000 UTC m=+51.408630394" Jan 15 12:51:33.067190 systemd-networkd[1609]: cali437d500f9ab: Gained IPv6LL Jan 15 12:51:33.835292 systemd-networkd[1609]: cali68fcbed395e: Gained IPv6LL Jan 15 12:51:34.283245 systemd-networkd[1609]: cali90eae49d204: Gained IPv6LL Jan 15 12:51:35.120093 containerd[1744]: time="2025-01-15T12:51:35.120039040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:35.125091 containerd[1744]: time="2025-01-15T12:51:35.125035768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 15 12:51:35.129933 containerd[1744]: time="2025-01-15T12:51:35.129849936Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:35.138098 containerd[1744]: time="2025-01-15T12:51:35.137087707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:35.140304 containerd[1744]: time="2025-01-15T12:51:35.140248712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.110009441s" Jan 15 12:51:35.140780 containerd[1744]: time="2025-01-15T12:51:35.140753473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 15 12:51:35.142286 containerd[1744]: time="2025-01-15T12:51:35.142037635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 12:51:35.146241 containerd[1744]: time="2025-01-15T12:51:35.146202042Z" level=info msg="CreateContainer within sandbox \"31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 12:51:35.195723 containerd[1744]: time="2025-01-15T12:51:35.195677522Z" level=info msg="CreateContainer within sandbox \"31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7eead5e76d1632c243841743895144589a120899c9fe967a6618a0f45dafc91d\"" Jan 15 12:51:35.197926 containerd[1744]: 
time="2025-01-15T12:51:35.196858684Z" level=info msg="StartContainer for \"7eead5e76d1632c243841743895144589a120899c9fe967a6618a0f45dafc91d\"" Jan 15 12:51:35.232275 systemd[1]: Started cri-containerd-7eead5e76d1632c243841743895144589a120899c9fe967a6618a0f45dafc91d.scope - libcontainer container 7eead5e76d1632c243841743895144589a120899c9fe967a6618a0f45dafc91d. Jan 15 12:51:35.271635 containerd[1744]: time="2025-01-15T12:51:35.271182045Z" level=info msg="StartContainer for \"7eead5e76d1632c243841743895144589a120899c9fe967a6618a0f45dafc91d\" returns successfully" Jan 15 12:51:35.456832 containerd[1744]: time="2025-01-15T12:51:35.456070252Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:35.460020 containerd[1744]: time="2025-01-15T12:51:35.459970299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 15 12:51:35.465925 containerd[1744]: time="2025-01-15T12:51:35.464666069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 320.85611ms" Jan 15 12:51:35.465925 containerd[1744]: time="2025-01-15T12:51:35.464722509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 15 12:51:35.468086 containerd[1744]: time="2025-01-15T12:51:35.467843435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 15 12:51:35.473548 containerd[1744]: time="2025-01-15T12:51:35.473265045Z" level=info msg="CreateContainer within sandbox \"abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 12:51:35.513164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409008961.mount: Deactivated successfully. Jan 15 12:51:35.521863 containerd[1744]: time="2025-01-15T12:51:35.521721380Z" level=info msg="CreateContainer within sandbox \"abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e6c29c1c5c69113c0ea579c31e2a25c811839f57ffad49e0857aef055d9ce847\"" Jan 15 12:51:35.523704 containerd[1744]: time="2025-01-15T12:51:35.522551422Z" level=info msg="StartContainer for \"e6c29c1c5c69113c0ea579c31e2a25c811839f57ffad49e0857aef055d9ce847\"" Jan 15 12:51:35.557325 systemd[1]: Started cri-containerd-e6c29c1c5c69113c0ea579c31e2a25c811839f57ffad49e0857aef055d9ce847.scope - libcontainer container e6c29c1c5c69113c0ea579c31e2a25c811839f57ffad49e0857aef055d9ce847. 
Jan 15 12:51:35.608222 containerd[1744]: time="2025-01-15T12:51:35.608174630Z" level=info msg="StartContainer for \"e6c29c1c5c69113c0ea579c31e2a25c811839f57ffad49e0857aef055d9ce847\" returns successfully" Jan 15 12:51:36.040061 kubelet[3294]: I0115 12:51:36.040024 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64bc7cdd6d-87bn5" podStartSLOduration=28.21803082 podStartE2EDuration="33.039961396s" podCreationTimestamp="2025-01-15 12:51:03 +0000 UTC" firstStartedPulling="2025-01-15 12:51:30.319256538 +0000 UTC m=+48.708827098" lastFinishedPulling="2025-01-15 12:51:35.141187114 +0000 UTC m=+53.530757674" observedRunningTime="2025-01-15 12:51:36.039742955 +0000 UTC m=+54.429313515" watchObservedRunningTime="2025-01-15 12:51:36.039961396 +0000 UTC m=+54.429532036" Jan 15 12:51:37.030538 kubelet[3294]: I0115 12:51:37.030282 3294 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:51:37.169057 containerd[1744]: time="2025-01-15T12:51:37.168956207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:37.173290 containerd[1744]: time="2025-01-15T12:51:37.173218616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 15 12:51:37.177777 containerd[1744]: time="2025-01-15T12:51:37.177702545Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:37.185944 containerd[1744]: time="2025-01-15T12:51:37.185290679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:37.186279 containerd[1744]: time="2025-01-15T12:51:37.186244601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.718353406s" Jan 15 12:51:37.186378 containerd[1744]: time="2025-01-15T12:51:37.186359082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 15 12:51:37.187240 containerd[1744]: time="2025-01-15T12:51:37.187190283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 15 12:51:37.191530 containerd[1744]: time="2025-01-15T12:51:37.191502532Z" level=info msg="CreateContainer within sandbox \"26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 15 12:51:37.233394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652122338.mount: Deactivated successfully. 
Jan 15 12:51:37.246826 containerd[1744]: time="2025-01-15T12:51:37.246414439Z" level=info msg="CreateContainer within sandbox \"26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"da2de8c87769ef9e870a4ead6e997201dd987463368df42c535c3879b4488bf6\"" Jan 15 12:51:37.250165 containerd[1744]: time="2025-01-15T12:51:37.247360361Z" level=info msg="StartContainer for \"da2de8c87769ef9e870a4ead6e997201dd987463368df42c535c3879b4488bf6\"" Jan 15 12:51:37.265728 kubelet[3294]: I0115 12:51:37.265520 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64bc7cdd6d-9bxp5" podStartSLOduration=30.031864264 podStartE2EDuration="34.265472836s" podCreationTimestamp="2025-01-15 12:51:03 +0000 UTC" firstStartedPulling="2025-01-15 12:51:31.231519097 +0000 UTC m=+49.621089657" lastFinishedPulling="2025-01-15 12:51:35.465127709 +0000 UTC m=+53.854698229" observedRunningTime="2025-01-15 12:51:36.058926913 +0000 UTC m=+54.448497473" watchObservedRunningTime="2025-01-15 12:51:37.265472836 +0000 UTC m=+55.655043396" Jan 15 12:51:37.299228 systemd[1]: Started cri-containerd-da2de8c87769ef9e870a4ead6e997201dd987463368df42c535c3879b4488bf6.scope - libcontainer container da2de8c87769ef9e870a4ead6e997201dd987463368df42c535c3879b4488bf6. Jan 15 12:51:37.334707 containerd[1744]: time="2025-01-15T12:51:37.334658012Z" level=info msg="StartContainer for \"da2de8c87769ef9e870a4ead6e997201dd987463368df42c535c3879b4488bf6\" returns successfully" Jan 15 12:51:37.855906 kubelet[3294]: I0115 12:51:37.855867 3294 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 15 12:51:37.855906 kubelet[3294]: I0115 12:51:37.855913 3294 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 15 12:51:39.970881 containerd[1744]: time="2025-01-15T12:51:39.970092014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:39.972815 containerd[1744]: time="2025-01-15T12:51:39.972774019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 15 12:51:39.978862 containerd[1744]: time="2025-01-15T12:51:39.977471068Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:39.984246 containerd[1744]: time="2025-01-15T12:51:39.984174081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:39.985762 containerd[1744]: time="2025-01-15T12:51:39.985439683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.79810644s" Jan 15 12:51:39.985762 containerd[1744]: 
time="2025-01-15T12:51:39.985545843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 15 12:51:40.003797 containerd[1744]: time="2025-01-15T12:51:40.003755038Z" level=info msg="CreateContainer within sandbox \"19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 15 12:51:40.051623 containerd[1744]: time="2025-01-15T12:51:40.051487569Z" level=info msg="CreateContainer within sandbox \"19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dcbd661883959189220749509cf7ae52fc2bcd9960970018e6d3a14bee0ab2d8\"" Jan 15 12:51:40.053204 containerd[1744]: time="2025-01-15T12:51:40.052112650Z" level=info msg="StartContainer for \"dcbd661883959189220749509cf7ae52fc2bcd9960970018e6d3a14bee0ab2d8\"" Jan 15 12:51:40.083187 systemd[1]: Started cri-containerd-dcbd661883959189220749509cf7ae52fc2bcd9960970018e6d3a14bee0ab2d8.scope - libcontainer container dcbd661883959189220749509cf7ae52fc2bcd9960970018e6d3a14bee0ab2d8. Jan 15 12:51:40.127452 containerd[1744]: time="2025-01-15T12:51:40.127412234Z" level=info msg="StartContainer for \"dcbd661883959189220749509cf7ae52fc2bcd9960970018e6d3a14bee0ab2d8\" returns successfully" Jan 15 12:51:41.063098 kubelet[3294]: I0115 12:51:41.062352 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-qbhfp" podStartSLOduration=30.137206343 podStartE2EDuration="37.06230818s" podCreationTimestamp="2025-01-15 12:51:04 +0000 UTC" firstStartedPulling="2025-01-15 12:51:30.261732125 +0000 UTC m=+48.651302685" lastFinishedPulling="2025-01-15 12:51:37.186833962 +0000 UTC m=+55.576404522" observedRunningTime="2025-01-15 12:51:38.053524024 +0000 UTC m=+56.443094584" watchObservedRunningTime="2025-01-15 12:51:41.06230818 +0000 UTC m=+59.451878740" Jan 15 12:51:41.063098 kubelet[3294]: I0115 12:51:41.062534 3294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-697f49564b-jvb87" podStartSLOduration=29.49123159 podStartE2EDuration="37.06251794s" podCreationTimestamp="2025-01-15 12:51:04 +0000 UTC" firstStartedPulling="2025-01-15 12:51:32.414645774 +0000 UTC m=+50.804216334" lastFinishedPulling="2025-01-15 12:51:39.985932124 +0000 UTC m=+58.375502684" observedRunningTime="2025-01-15 12:51:41.061386898 +0000 UTC m=+59.450957458" watchObservedRunningTime="2025-01-15 12:51:41.06251794 +0000 UTC m=+59.452088500" Jan 15 12:51:41.708333 containerd[1744]: time="2025-01-15T12:51:41.708292614Z" level=info msg="StopPodSandbox for \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\"" Jan 15 12:51:41.783972 containerd[1744]: 2025-01-15 12:51:41.752 [WARNING][5573] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0", GenerateName:"calico-apiserver-64bc7cdd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bc7cdd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a", Pod:"calico-apiserver-64bc7cdd6d-9bxp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9a6c4915bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:41.783972 containerd[1744]: 2025-01-15 12:51:41.752 [INFO][5573] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:41.783972 containerd[1744]: 2025-01-15 12:51:41.752 [INFO][5573] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" iface="eth0" netns="" Jan 15 12:51:41.783972 containerd[1744]: 2025-01-15 12:51:41.752 [INFO][5573] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:41.783972 containerd[1744]: 2025-01-15 12:51:41.752 [INFO][5573] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:41.783972 containerd[1744]: 2025-01-15 12:51:41.771 [INFO][5581] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" HandleID="k8s-pod-network.8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:41.783972 containerd[1744]: 2025-01-15 12:51:41.771 [INFO][5581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:41.783972 containerd[1744]: 2025-01-15 12:51:41.771 [INFO][5581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:41.783972 containerd[1744]: 2025-01-15 12:51:41.779 [WARNING][5581] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" HandleID="k8s-pod-network.8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:41.783972 containerd[1744]: 2025-01-15 12:51:41.779 [INFO][5581] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" HandleID="k8s-pod-network.8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:41.783972 containerd[1744]: 2025-01-15 12:51:41.781 [INFO][5581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:41.783972 containerd[1744]: 2025-01-15 12:51:41.782 [INFO][5573] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:41.784504 containerd[1744]: time="2025-01-15T12:51:41.784011998Z" level=info msg="TearDown network for sandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\" successfully" Jan 15 12:51:41.784504 containerd[1744]: time="2025-01-15T12:51:41.784041398Z" level=info msg="StopPodSandbox for \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\" returns successfully" Jan 15 12:51:41.785199 containerd[1744]: time="2025-01-15T12:51:41.784908120Z" level=info msg="RemovePodSandbox for \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\"" Jan 15 12:51:41.785199 containerd[1744]: time="2025-01-15T12:51:41.784946400Z" level=info msg="Forcibly stopping sandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\"" Jan 15 12:51:41.859781 containerd[1744]: 2025-01-15 12:51:41.825 [WARNING][5600] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0", GenerateName:"calico-apiserver-64bc7cdd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b898b8c3-71eb-4b4c-8c8f-7b3ba0996ace", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bc7cdd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"abce667655607f5ec67a0d101968fc8ed69aa1c62fd87026a5c11594d3fa610a", Pod:"calico-apiserver-64bc7cdd6d-9bxp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9a6c4915bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:41.859781 containerd[1744]: 2025-01-15 12:51:41.825 [INFO][5600] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:41.859781 containerd[1744]: 2025-01-15 12:51:41.825 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" iface="eth0" netns="" Jan 15 12:51:41.859781 containerd[1744]: 2025-01-15 12:51:41.826 [INFO][5600] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:41.859781 containerd[1744]: 2025-01-15 12:51:41.826 [INFO][5600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:41.859781 containerd[1744]: 2025-01-15 12:51:41.845 [INFO][5606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" HandleID="k8s-pod-network.8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:41.859781 containerd[1744]: 2025-01-15 12:51:41.845 [INFO][5606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:41.859781 containerd[1744]: 2025-01-15 12:51:41.845 [INFO][5606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:41.859781 containerd[1744]: 2025-01-15 12:51:41.855 [WARNING][5606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" HandleID="k8s-pod-network.8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:41.859781 containerd[1744]: 2025-01-15 12:51:41.855 [INFO][5606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" HandleID="k8s-pod-network.8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--9bxp5-eth0" Jan 15 12:51:41.859781 containerd[1744]: 2025-01-15 12:51:41.856 [INFO][5606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:41.859781 containerd[1744]: 2025-01-15 12:51:41.858 [INFO][5600] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599" Jan 15 12:51:41.860234 containerd[1744]: time="2025-01-15T12:51:41.859824463Z" level=info msg="TearDown network for sandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\" successfully" Jan 15 12:51:41.885721 containerd[1744]: time="2025-01-15T12:51:41.885671352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:51:41.885879 containerd[1744]: time="2025-01-15T12:51:41.885750232Z" level=info msg="RemovePodSandbox \"8e79bae19e9326b6c315e8dfd329c17f1a1f0c9adec1daa292c9d66a76729599\" returns successfully" Jan 15 12:51:41.886347 containerd[1744]: time="2025-01-15T12:51:41.886325354Z" level=info msg="StopPodSandbox for \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\"" Jan 15 12:51:41.959235 containerd[1744]: 2025-01-15 12:51:41.926 [WARNING][5625] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b", Pod:"csi-node-driver-qbhfp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a24b6a8301", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:41.959235 containerd[1744]: 2025-01-15 12:51:41.926 [INFO][5625] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:41.959235 containerd[1744]: 2025-01-15 12:51:41.926 [INFO][5625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" iface="eth0" netns="" Jan 15 12:51:41.959235 containerd[1744]: 2025-01-15 12:51:41.926 [INFO][5625] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:41.959235 containerd[1744]: 2025-01-15 12:51:41.926 [INFO][5625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:41.959235 containerd[1744]: 2025-01-15 12:51:41.944 [INFO][5631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" HandleID="k8s-pod-network.a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:41.959235 containerd[1744]: 2025-01-15 12:51:41.944 [INFO][5631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:41.959235 containerd[1744]: 2025-01-15 12:51:41.944 [INFO][5631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:41.959235 containerd[1744]: 2025-01-15 12:51:41.954 [WARNING][5631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" HandleID="k8s-pod-network.a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:41.959235 containerd[1744]: 2025-01-15 12:51:41.954 [INFO][5631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" HandleID="k8s-pod-network.a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:41.959235 containerd[1744]: 2025-01-15 12:51:41.955 [INFO][5631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:41.959235 containerd[1744]: 2025-01-15 12:51:41.957 [INFO][5625] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:41.959235 containerd[1744]: time="2025-01-15T12:51:41.959125213Z" level=info msg="TearDown network for sandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\" successfully" Jan 15 12:51:41.959235 containerd[1744]: time="2025-01-15T12:51:41.959150373Z" level=info msg="StopPodSandbox for \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\" returns successfully" Jan 15 12:51:41.961337 containerd[1744]: time="2025-01-15T12:51:41.960216135Z" level=info msg="RemovePodSandbox for \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\"" Jan 15 12:51:41.961337 containerd[1744]: time="2025-01-15T12:51:41.960253215Z" level=info msg="Forcibly stopping sandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\"" Jan 15 12:51:42.040659 containerd[1744]: 2025-01-15 12:51:42.008 [WARNING][5651] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ec3a0a6-7f52-426d-bd6b-d0c0bd61ff03", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"26d8ece166ac665821f23b7290b5ff64763dcde34368f87f58798fcd22a5301b", Pod:"csi-node-driver-qbhfp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a24b6a8301", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:42.040659 containerd[1744]: 2025-01-15 12:51:42.008 [INFO][5651] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:42.040659 containerd[1744]: 2025-01-15 12:51:42.008 [INFO][5651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" iface="eth0" netns="" Jan 15 12:51:42.040659 containerd[1744]: 2025-01-15 12:51:42.008 [INFO][5651] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:42.040659 containerd[1744]: 2025-01-15 12:51:42.008 [INFO][5651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:42.040659 containerd[1744]: 2025-01-15 12:51:42.027 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" HandleID="k8s-pod-network.a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:42.040659 containerd[1744]: 2025-01-15 12:51:42.027 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:42.040659 containerd[1744]: 2025-01-15 12:51:42.027 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:42.040659 containerd[1744]: 2025-01-15 12:51:42.036 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" HandleID="k8s-pod-network.a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:42.040659 containerd[1744]: 2025-01-15 12:51:42.036 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" HandleID="k8s-pod-network.a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-csi--node--driver--qbhfp-eth0" Jan 15 12:51:42.040659 containerd[1744]: 2025-01-15 12:51:42.037 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:42.040659 containerd[1744]: 2025-01-15 12:51:42.039 [INFO][5651] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c" Jan 15 12:51:42.041627 containerd[1744]: time="2025-01-15T12:51:42.041181849Z" level=info msg="TearDown network for sandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\" successfully" Jan 15 12:51:42.049581 containerd[1744]: time="2025-01-15T12:51:42.049528905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:51:42.049673 containerd[1744]: time="2025-01-15T12:51:42.049593585Z" level=info msg="RemovePodSandbox \"a022b0b9e052c42fefe183f697a9b480df8290a294495535ce4c17fa75c2426c\" returns successfully" Jan 15 12:51:42.050221 containerd[1744]: time="2025-01-15T12:51:42.049890906Z" level=info msg="StopPodSandbox for \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\"" Jan 15 12:51:42.127410 containerd[1744]: 2025-01-15 12:51:42.092 [WARNING][5676] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0", GenerateName:"calico-apiserver-64bc7cdd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd1c71d7-7e6f-4176-b57a-00a3f42a566e", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bc7cdd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd", Pod:"calico-apiserver-64bc7cdd6d-87bn5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic32203b1025", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:42.127410 containerd[1744]: 2025-01-15 12:51:42.092 [INFO][5676] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:42.127410 containerd[1744]: 2025-01-15 12:51:42.092 [INFO][5676] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" iface="eth0" netns="" Jan 15 12:51:42.127410 containerd[1744]: 2025-01-15 12:51:42.092 [INFO][5676] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:42.127410 containerd[1744]: 2025-01-15 12:51:42.092 [INFO][5676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:42.127410 containerd[1744]: 2025-01-15 12:51:42.114 [INFO][5682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" HandleID="k8s-pod-network.1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:42.127410 containerd[1744]: 2025-01-15 12:51:42.114 [INFO][5682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:42.127410 containerd[1744]: 2025-01-15 12:51:42.114 [INFO][5682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:42.127410 containerd[1744]: 2025-01-15 12:51:42.122 [WARNING][5682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" HandleID="k8s-pod-network.1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:42.127410 containerd[1744]: 2025-01-15 12:51:42.122 [INFO][5682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" HandleID="k8s-pod-network.1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:42.127410 containerd[1744]: 2025-01-15 12:51:42.124 [INFO][5682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:42.127410 containerd[1744]: 2025-01-15 12:51:42.125 [INFO][5676] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:42.127821 containerd[1744]: time="2025-01-15T12:51:42.127463334Z" level=info msg="TearDown network for sandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\" successfully" Jan 15 12:51:42.127821 containerd[1744]: time="2025-01-15T12:51:42.127488494Z" level=info msg="StopPodSandbox for \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\" returns successfully" Jan 15 12:51:42.127971 containerd[1744]: time="2025-01-15T12:51:42.127939695Z" level=info msg="RemovePodSandbox for \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\"" Jan 15 12:51:42.128028 containerd[1744]: time="2025-01-15T12:51:42.127976655Z" level=info msg="Forcibly stopping sandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\"" Jan 15 12:51:42.199756 containerd[1744]: 2025-01-15 12:51:42.166 [WARNING][5700] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0", GenerateName:"calico-apiserver-64bc7cdd6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd1c71d7-7e6f-4176-b57a-00a3f42a566e", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bc7cdd6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"31fad413d8c6461531c619cd1445a537ee053cb90314f663cc99a44e886a6ffd", Pod:"calico-apiserver-64bc7cdd6d-87bn5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic32203b1025", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:42.199756 containerd[1744]: 2025-01-15 12:51:42.166 [INFO][5700] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:42.199756 containerd[1744]: 2025-01-15 12:51:42.166 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" iface="eth0" netns="" Jan 15 12:51:42.199756 containerd[1744]: 2025-01-15 12:51:42.167 [INFO][5700] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:42.199756 containerd[1744]: 2025-01-15 12:51:42.167 [INFO][5700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:42.199756 containerd[1744]: 2025-01-15 12:51:42.187 [INFO][5706] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" HandleID="k8s-pod-network.1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:42.199756 containerd[1744]: 2025-01-15 12:51:42.187 [INFO][5706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:42.199756 containerd[1744]: 2025-01-15 12:51:42.187 [INFO][5706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:42.199756 containerd[1744]: 2025-01-15 12:51:42.195 [WARNING][5706] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" HandleID="k8s-pod-network.1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:42.199756 containerd[1744]: 2025-01-15 12:51:42.195 [INFO][5706] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" HandleID="k8s-pod-network.1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--apiserver--64bc7cdd6d--87bn5-eth0" Jan 15 12:51:42.199756 containerd[1744]: 2025-01-15 12:51:42.197 [INFO][5706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:42.199756 containerd[1744]: 2025-01-15 12:51:42.198 [INFO][5700] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c" Jan 15 12:51:42.200329 containerd[1744]: time="2025-01-15T12:51:42.199797072Z" level=info msg="TearDown network for sandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\" successfully" Jan 15 12:51:42.208671 containerd[1744]: time="2025-01-15T12:51:42.208605169Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:51:42.208824 containerd[1744]: time="2025-01-15T12:51:42.208699729Z" level=info msg="RemovePodSandbox \"1f15409854f50c69a92ac36cc504ee57e3562f411a559ee1ea15726126776b5c\" returns successfully" Jan 15 12:51:42.209590 containerd[1744]: time="2025-01-15T12:51:42.209325170Z" level=info msg="StopPodSandbox for \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\"" Jan 15 12:51:42.285104 containerd[1744]: 2025-01-15 12:51:42.249 [WARNING][5724] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"22ef187a-1055-4e64-99a2-f81f790e5b7a", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660", Pod:"coredns-76f75df574-hpmct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90eae49d204", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:42.285104 containerd[1744]: 2025-01-15 12:51:42.250 [INFO][5724] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:42.285104 containerd[1744]: 2025-01-15 12:51:42.250 [INFO][5724] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" iface="eth0" netns="" Jan 15 12:51:42.285104 containerd[1744]: 2025-01-15 12:51:42.250 [INFO][5724] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:42.285104 containerd[1744]: 2025-01-15 12:51:42.250 [INFO][5724] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:42.285104 containerd[1744]: 2025-01-15 12:51:42.271 [INFO][5730] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" HandleID="k8s-pod-network.b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:42.285104 containerd[1744]: 2025-01-15 12:51:42.271 [INFO][5730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:42.285104 containerd[1744]: 2025-01-15 12:51:42.271 [INFO][5730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:42.285104 containerd[1744]: 2025-01-15 12:51:42.279 [WARNING][5730] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" HandleID="k8s-pod-network.b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:42.285104 containerd[1744]: 2025-01-15 12:51:42.280 [INFO][5730] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" HandleID="k8s-pod-network.b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:42.285104 containerd[1744]: 2025-01-15 12:51:42.281 [INFO][5730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:42.285104 containerd[1744]: 2025-01-15 12:51:42.283 [INFO][5724] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:42.285755 containerd[1744]: time="2025-01-15T12:51:42.285369956Z" level=info msg="TearDown network for sandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\" successfully" Jan 15 12:51:42.285755 containerd[1744]: time="2025-01-15T12:51:42.285408116Z" level=info msg="StopPodSandbox for \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\" returns successfully" Jan 15 12:51:42.286294 containerd[1744]: time="2025-01-15T12:51:42.286252077Z" level=info msg="RemovePodSandbox for \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\"" Jan 15 12:51:42.286294 containerd[1744]: time="2025-01-15T12:51:42.286289757Z" level=info msg="Forcibly stopping sandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\"" Jan 15 12:51:42.360104 containerd[1744]: 2025-01-15 12:51:42.324 [WARNING][5748] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"22ef187a-1055-4e64-99a2-f81f790e5b7a", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"28436ff678ce38c719aeb724acc30d4174157b8c4126f0d79c235e3b3a102660", Pod:"coredns-76f75df574-hpmct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90eae49d204", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:42.360104 containerd[1744]: 2025-01-15 12:51:42.324 [INFO][5748] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:42.360104 containerd[1744]: 2025-01-15 12:51:42.324 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" iface="eth0" netns="" Jan 15 12:51:42.360104 containerd[1744]: 2025-01-15 12:51:42.324 [INFO][5748] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:42.360104 containerd[1744]: 2025-01-15 12:51:42.324 [INFO][5748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:42.360104 containerd[1744]: 2025-01-15 12:51:42.344 [INFO][5754] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" HandleID="k8s-pod-network.b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:42.360104 containerd[1744]: 2025-01-15 12:51:42.344 [INFO][5754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:42.360104 containerd[1744]: 2025-01-15 12:51:42.344 [INFO][5754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:42.360104 containerd[1744]: 2025-01-15 12:51:42.355 [WARNING][5754] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" HandleID="k8s-pod-network.b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:42.360104 containerd[1744]: 2025-01-15 12:51:42.355 [INFO][5754] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" HandleID="k8s-pod-network.b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--hpmct-eth0" Jan 15 12:51:42.360104 containerd[1744]: 2025-01-15 12:51:42.357 [INFO][5754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:42.360104 containerd[1744]: 2025-01-15 12:51:42.358 [INFO][5748] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84" Jan 15 12:51:42.361062 containerd[1744]: time="2025-01-15T12:51:42.360568539Z" level=info msg="TearDown network for sandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\" successfully" Jan 15 12:51:42.371280 containerd[1744]: time="2025-01-15T12:51:42.371107879Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:51:42.371280 containerd[1744]: time="2025-01-15T12:51:42.371178160Z" level=info msg="RemovePodSandbox \"b9c59c54e91d7190e71e2ea0389dc6ecd8adeab4af0b2979969029c167fd7c84\" returns successfully" Jan 15 12:51:42.371701 containerd[1744]: time="2025-01-15T12:51:42.371666081Z" level=info msg="StopPodSandbox for \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\"" Jan 15 12:51:42.446093 containerd[1744]: 2025-01-15 12:51:42.411 [WARNING][5772] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0", GenerateName:"calico-kube-controllers-697f49564b-", Namespace:"calico-system", SelfLink:"", UID:"e669e4c4-4197-45bf-985d-bac7d7c912eb", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"697f49564b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67", Pod:"calico-kube-controllers-697f49564b-jvb87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68fcbed395e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:42.446093 containerd[1744]: 2025-01-15 12:51:42.411 [INFO][5772] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:42.446093 containerd[1744]: 2025-01-15 12:51:42.411 [INFO][5772] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" iface="eth0" netns="" Jan 15 12:51:42.446093 containerd[1744]: 2025-01-15 12:51:42.411 [INFO][5772] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:42.446093 containerd[1744]: 2025-01-15 12:51:42.411 [INFO][5772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:42.446093 containerd[1744]: 2025-01-15 12:51:42.431 [INFO][5780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" HandleID="k8s-pod-network.02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:42.446093 containerd[1744]: 2025-01-15 12:51:42.431 [INFO][5780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:42.446093 containerd[1744]: 2025-01-15 12:51:42.431 [INFO][5780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:42.446093 containerd[1744]: 2025-01-15 12:51:42.441 [WARNING][5780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" HandleID="k8s-pod-network.02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:42.446093 containerd[1744]: 2025-01-15 12:51:42.441 [INFO][5780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" HandleID="k8s-pod-network.02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:42.446093 containerd[1744]: 2025-01-15 12:51:42.443 [INFO][5780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:42.446093 containerd[1744]: 2025-01-15 12:51:42.444 [INFO][5772] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:42.446594 containerd[1744]: time="2025-01-15T12:51:42.446132343Z" level=info msg="TearDown network for sandbox \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\" successfully" Jan 15 12:51:42.446594 containerd[1744]: time="2025-01-15T12:51:42.446158303Z" level=info msg="StopPodSandbox for \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\" returns successfully" Jan 15 12:51:42.447225 containerd[1744]: time="2025-01-15T12:51:42.446854384Z" level=info msg="RemovePodSandbox for \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\"" Jan 15 12:51:42.447225 containerd[1744]: time="2025-01-15T12:51:42.446915944Z" level=info msg="Forcibly stopping sandbox \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\"" Jan 15 12:51:42.520125 containerd[1744]: 2025-01-15 12:51:42.485 [WARNING][5799] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0", GenerateName:"calico-kube-controllers-697f49564b-", Namespace:"calico-system", SelfLink:"", UID:"e669e4c4-4197-45bf-985d-bac7d7c912eb", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"697f49564b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"19f0c40397366b5a3683af87dd7e9847077d525238b47e974ce3ab091416ee67", Pod:"calico-kube-controllers-697f49564b-jvb87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68fcbed395e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:42.520125 containerd[1744]: 2025-01-15 12:51:42.485 [INFO][5799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:42.520125 containerd[1744]: 2025-01-15 12:51:42.485 [INFO][5799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" iface="eth0" netns="" Jan 15 12:51:42.520125 containerd[1744]: 2025-01-15 12:51:42.485 [INFO][5799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:42.520125 containerd[1744]: 2025-01-15 12:51:42.485 [INFO][5799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:42.520125 containerd[1744]: 2025-01-15 12:51:42.507 [INFO][5806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" HandleID="k8s-pod-network.02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:42.520125 containerd[1744]: 2025-01-15 12:51:42.507 [INFO][5806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:42.520125 containerd[1744]: 2025-01-15 12:51:42.507 [INFO][5806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:42.520125 containerd[1744]: 2025-01-15 12:51:42.515 [WARNING][5806] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" HandleID="k8s-pod-network.02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:42.520125 containerd[1744]: 2025-01-15 12:51:42.515 [INFO][5806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" HandleID="k8s-pod-network.02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Workload="ci--4081.3.0--a--b8bd16053a-k8s-calico--kube--controllers--697f49564b--jvb87-eth0" Jan 15 12:51:42.520125 containerd[1744]: 2025-01-15 12:51:42.517 [INFO][5806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:42.520125 containerd[1744]: 2025-01-15 12:51:42.518 [INFO][5799] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900" Jan 15 12:51:42.521383 containerd[1744]: time="2025-01-15T12:51:42.520109604Z" level=info msg="TearDown network for sandbox \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\" successfully" Jan 15 12:51:42.531964 containerd[1744]: time="2025-01-15T12:51:42.531914067Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:51:42.532140 containerd[1744]: time="2025-01-15T12:51:42.532003747Z" level=info msg="RemovePodSandbox \"02dd325ac1bcd6c64ef7d485637ee0d8cf90bdf8300ae7b303dc8beba7f1d900\" returns successfully" Jan 15 12:51:42.532879 containerd[1744]: time="2025-01-15T12:51:42.532563628Z" level=info msg="StopPodSandbox for \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\"" Jan 15 12:51:42.613169 containerd[1744]: 2025-01-15 12:51:42.575 [WARNING][5824] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5aee6d16-2980-4221-9d0f-dbd0eaab2ab3", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018", Pod:"coredns-76f75df574-psqf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali437d500f9ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:42.613169 containerd[1744]: 2025-01-15 12:51:42.576 [INFO][5824] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:42.613169 containerd[1744]: 2025-01-15 12:51:42.576 [INFO][5824] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" iface="eth0" netns="" Jan 15 12:51:42.613169 containerd[1744]: 2025-01-15 12:51:42.576 [INFO][5824] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:42.613169 containerd[1744]: 2025-01-15 12:51:42.576 [INFO][5824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:42.613169 containerd[1744]: 2025-01-15 12:51:42.600 [INFO][5831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" HandleID="k8s-pod-network.7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:42.613169 containerd[1744]: 2025-01-15 12:51:42.600 [INFO][5831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:42.613169 containerd[1744]: 2025-01-15 12:51:42.600 [INFO][5831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:42.613169 containerd[1744]: 2025-01-15 12:51:42.608 [WARNING][5831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" HandleID="k8s-pod-network.7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:42.613169 containerd[1744]: 2025-01-15 12:51:42.608 [INFO][5831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" HandleID="k8s-pod-network.7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:42.613169 containerd[1744]: 2025-01-15 12:51:42.610 [INFO][5831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:42.613169 containerd[1744]: 2025-01-15 12:51:42.611 [INFO][5824] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:42.613946 containerd[1744]: time="2025-01-15T12:51:42.613669543Z" level=info msg="TearDown network for sandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\" successfully" Jan 15 12:51:42.613946 containerd[1744]: time="2025-01-15T12:51:42.613722103Z" level=info msg="StopPodSandbox for \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\" returns successfully" Jan 15 12:51:42.614627 containerd[1744]: time="2025-01-15T12:51:42.614595104Z" level=info msg="RemovePodSandbox for \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\"" Jan 15 12:51:42.614708 containerd[1744]: time="2025-01-15T12:51:42.614631505Z" level=info msg="Forcibly stopping sandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\"" Jan 15 12:51:42.691047 containerd[1744]: 2025-01-15 12:51:42.653 [WARNING][5849] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5aee6d16-2980-4221-9d0f-dbd0eaab2ab3", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 50, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b8bd16053a", ContainerID:"fd4b25f15d59e28510b490e76a607ba4634cf51d5b3ffd9a2b3c3bb729be3018", Pod:"coredns-76f75df574-psqf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali437d500f9ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:42.691047 containerd[1744]: 2025-01-15 12:51:42.654 [INFO][5849] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:42.691047 containerd[1744]: 2025-01-15 12:51:42.654 [INFO][5849] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" iface="eth0" netns="" Jan 15 12:51:42.691047 containerd[1744]: 2025-01-15 12:51:42.654 [INFO][5849] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:42.691047 containerd[1744]: 2025-01-15 12:51:42.654 [INFO][5849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:42.691047 containerd[1744]: 2025-01-15 12:51:42.674 [INFO][5855] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" HandleID="k8s-pod-network.7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:42.691047 containerd[1744]: 2025-01-15 12:51:42.675 [INFO][5855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:42.691047 containerd[1744]: 2025-01-15 12:51:42.675 [INFO][5855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:51:42.691047 containerd[1744]: 2025-01-15 12:51:42.686 [WARNING][5855] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" HandleID="k8s-pod-network.7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:42.691047 containerd[1744]: 2025-01-15 12:51:42.686 [INFO][5855] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" HandleID="k8s-pod-network.7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Workload="ci--4081.3.0--a--b8bd16053a-k8s-coredns--76f75df574--psqf5-eth0" Jan 15 12:51:42.691047 containerd[1744]: 2025-01-15 12:51:42.688 [INFO][5855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:42.691047 containerd[1744]: 2025-01-15 12:51:42.689 [INFO][5849] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1" Jan 15 12:51:42.691487 containerd[1744]: time="2025-01-15T12:51:42.691093131Z" level=info msg="TearDown network for sandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\" successfully" Jan 15 12:51:43.258523 containerd[1744]: time="2025-01-15T12:51:43.258399974Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:51:43.258930 containerd[1744]: time="2025-01-15T12:51:43.258554694Z" level=info msg="RemovePodSandbox \"7d445184b31b42514b20840195c25d42dd347746bcb98a6b466f72c5e49c27e1\" returns successfully" Jan 15 12:52:14.818809 kubelet[3294]: I0115 12:52:14.818632 3294 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:52:22.800125 systemd[1]: run-containerd-runc-k8s.io-c722c03e43c4ce36de98062da3ebf5086370689fe6b217c6f12327bb35144c37-runc.6mVPPJ.mount: Deactivated successfully. Jan 15 12:52:52.802665 systemd[1]: run-containerd-runc-k8s.io-c722c03e43c4ce36de98062da3ebf5086370689fe6b217c6f12327bb35144c37-runc.93W3jX.mount: Deactivated successfully. Jan 15 12:53:37.376284 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.16.10:53806.service - OpenSSH per-connection server daemon (10.200.16.10:53806). Jan 15 12:53:37.822200 sshd[6125]: Accepted publickey for core from 10.200.16.10 port 53806 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:37.826518 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:37.832401 systemd-logind[1693]: New session 10 of user core. Jan 15 12:53:37.836511 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 15 12:53:38.250515 sshd[6125]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:38.254669 systemd[1]: sshd@7-10.200.20.38:22-10.200.16.10:53806.service: Deactivated successfully. Jan 15 12:53:38.257567 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 12:53:38.258652 systemd-logind[1693]: Session 10 logged out. Waiting for processes to exit. Jan 15 12:53:38.259592 systemd-logind[1693]: Removed session 10. Jan 15 12:53:43.330422 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.16.10:53814.service - OpenSSH per-connection server daemon (10.200.16.10:53814). 
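The containerd sequence above repeats one teardown cycle per sandbox: StopPodSandbox, a CNI DEL pass, then a forced RemovePodSandbox that runs the same DEL pass again. Two guards make the second pass harmless. First, the plugin refuses to delete a WorkloadEndpoint whose recorded ContainerID belongs to a newer sandbox ("CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP"). Second, the IPAM release is idempotent under the host-wide lock, so an address that is already gone is simply ignored ("Asked to release address but it doesn't exist. Ignoring"). The following Go sketch illustrates both guards; the types and helpers are illustrative stand-ins, not Calico's actual libcalico-go API.

package main

import (
	"fmt"
	"sync"
)

// Illustrative stand-in for the v3.WorkloadEndpointSpec fields that the
// log dumps above actually use during teardown.
type workloadEndpointSpec struct {
	ContainerID   string   // sandbox currently owning the endpoint
	InterfaceName string   // host-side veth, e.g. "cali68fcbed395e"
	IPNetworks    []string // e.g. {"192.168.3.197/32"}
}

// keepWEP mirrors "CNI_CONTAINERID does not match WorkloadEndpoint
// ContainerID, don't delete WEP": a CNI DEL for an old sandbox (02dd32...)
// must not remove the endpoint now owned by the pod's live sandbox (19f0c4...).
func keepWEP(spec workloadEndpointSpec, cniContainerID string) bool {
	return spec.ContainerID != cniContainerID
}

// ipamStore stands in for Calico's IPAM data guarded by the
// "host-wide IPAM lock" seen in the log.
type ipamStore struct {
	mu    sync.Mutex        // the host-wide lock
	byKey map[string]string // handleID or workloadID -> address
}

// release frees the address recorded under handleID, falling back to
// workloadID, and treats a missing entry as success, so a forced second
// teardown is a no-op.
func (s *ipamStore) release(handleID, workloadID string) {
	s.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."
	for _, key := range []string{handleID, workloadID} {
		if _, ok := s.byKey[key]; ok {
			delete(s.byKey, key)
			return
		}
	}
	fmt.Println("Asked to release address but it doesn't exist. Ignoring")
}

func main() {
	s := &ipamStore{byKey: map[string]string{}}
	spec := workloadEndpointSpec{ContainerID: "19f0c403...", InterfaceName: "cali68fcbed395e"}
	fmt.Println(keepWEP(spec, "02dd325a..."))     // true: the WEP is kept
	s.release("k8s-pod-network.02dd32...", "wep") // takes the "Ignoring" path
}

Because every step tolerates already-deleted state, both the first pass and the forced second pass can report "returns successfully", which is exactly what the transcript shows.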
Jan 15 12:53:43.780362 sshd[6143]: Accepted publickey for core from 10.200.16.10 port 53814 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:43.781860 sshd[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:43.785746 systemd-logind[1693]: New session 11 of user core. Jan 15 12:53:43.796200 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 12:53:44.176389 sshd[6143]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:44.179879 systemd[1]: sshd@8-10.200.20.38:22-10.200.16.10:53814.service: Deactivated successfully. Jan 15 12:53:44.181815 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 12:53:44.182863 systemd-logind[1693]: Session 11 logged out. Waiting for processes to exit. Jan 15 12:53:44.183766 systemd-logind[1693]: Removed session 11. Jan 15 12:53:49.267349 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.16.10:40452.service - OpenSSH per-connection server daemon (10.200.16.10:40452). Jan 15 12:53:49.716282 sshd[6176]: Accepted publickey for core from 10.200.16.10 port 40452 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:49.717832 sshd[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:49.722204 systemd-logind[1693]: New session 12 of user core. Jan 15 12:53:49.727140 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 15 12:53:50.113864 sshd[6176]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:50.118859 systemd[1]: sshd@9-10.200.20.38:22-10.200.16.10:40452.service: Deactivated successfully. Jan 15 12:53:50.120784 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 12:53:50.121618 systemd-logind[1693]: Session 12 logged out. Waiting for processes to exit. Jan 15 12:53:50.122868 systemd-logind[1693]: Removed session 12. Jan 15 12:53:50.205272 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.16.10:40462.service - OpenSSH per-connection server daemon (10.200.16.10:40462). Jan 15 12:53:50.680035 sshd[6190]: Accepted publickey for core from 10.200.16.10 port 40462 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:50.681456 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:50.686629 systemd-logind[1693]: New session 13 of user core. Jan 15 12:53:50.691165 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 15 12:53:50.860612 update_engine[1702]: I20250115 12:53:50.860555 1702 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 15 12:53:50.860612 update_engine[1702]: I20250115 12:53:50.860607 1702 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 15 12:53:50.861078 update_engine[1702]: I20250115 12:53:50.860809 1702 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 15 12:53:50.861338 update_engine[1702]: I20250115 12:53:50.861303 1702 omaha_request_params.cc:62] Current group set to lts Jan 15 12:53:50.861538 update_engine[1702]: I20250115 12:53:50.861404 1702 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 15 12:53:50.861538 update_engine[1702]: I20250115 12:53:50.861419 1702 update_attempter.cc:643] Scheduling an action processor start. 
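Before building the request, the prefs.cc lines above show update_engine probing its preference store: keys such as certificate-report-to-send-update, certificate-report-to-send-download, and aleph-version are each backed by a small file under /var/lib/update_engine/prefs, and "not present" just means that file has not been written yet. A minimal Go sketch of that file-per-key lookup, assuming the layout the log implies (update_engine itself is C++):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// pref reads one preference; a missing file is the "not present" case.
func pref(dir, key string) (string, bool) {
	b, err := os.ReadFile(filepath.Join(dir, key))
	if err != nil {
		return "", false
	}
	return string(b), true
}

func main() {
	dir := "/var/lib/update_engine/prefs"
	for _, key := range []string{"certificate-report-to-send-update", "aleph-version", "previous-version"} {
		if v, ok := pref(dir, key); ok {
			fmt.Printf("%s = %q\n", key, v)
		} else {
			fmt.Printf("%s not present in %s\n", key, dir)
		}
	}
}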
Jan 15 12:53:50.861538 update_engine[1702]: I20250115 12:53:50.861438 1702 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 15 12:53:50.861801 locksmithd[1779]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 15 12:53:50.863245 update_engine[1702]: I20250115 12:53:50.862473 1702 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 15 12:53:50.863245 update_engine[1702]: I20250115 12:53:50.862560 1702 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 15 12:53:50.863245 update_engine[1702]: I20250115 12:53:50.862569 1702 omaha_request_action.cc:272] Request: Jan 15 12:53:50.863245 update_engine[1702]: [Omaha request XML body not captured in this excerpt] Jan 15 12:53:50.863245 update_engine[1702]: I20250115 12:53:50.862576 1702 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 12:53:50.864608 update_engine[1702]: I20250115 12:53:50.864571 1702 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 12:53:50.865005 update_engine[1702]: I20250115 12:53:50.864964 1702 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 12:53:50.877052 update_engine[1702]: E20250115 12:53:50.876966 1702 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 12:53:50.877178 update_engine[1702]: I20250115 12:53:50.877099 1702 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 15 12:53:51.140592 sshd[6190]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:51.144638 systemd[1]: sshd@10-10.200.20.38:22-10.200.16.10:40462.service: Deactivated successfully. Jan 15 12:53:51.146702 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 12:53:51.147493 systemd-logind[1693]: Session 13 logged out. Waiting for processes to exit. Jan 15 12:53:51.149120 systemd-logind[1693]: Removed session 13. Jan 15 12:53:51.230348 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.16.10:40474.service - OpenSSH per-connection server daemon (10.200.16.10:40474). Jan 15 12:53:51.674459 sshd[6201]: Accepted publickey for core from 10.200.16.10 port 40474 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:51.676082 sshd[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:51.680137 systemd-logind[1693]: New session 14 of user core. Jan 15 12:53:51.685137 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 15 12:53:52.066346 sshd[6201]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:52.069977 systemd-logind[1693]: Session 14 logged out. Waiting for processes to exit. Jan 15 12:53:52.070397 systemd[1]: sshd@11-10.200.20.38:22-10.200.16.10:40474.service: Deactivated successfully. Jan 15 12:53:52.072653 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 12:53:52.074775 systemd-logind[1693]: Removed session 14. Jan 15 12:53:57.154245 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.16.10:40030.service - OpenSSH per-connection server daemon (10.200.16.10:40030).
Jan 15 12:53:57.643590 sshd[6243]: Accepted publickey for core from 10.200.16.10 port 40030 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:57.645212 sshd[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:57.649973 systemd-logind[1693]: New session 15 of user core. Jan 15 12:53:57.657198 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 15 12:53:58.068262 sshd[6243]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:58.072058 systemd[1]: sshd@12-10.200.20.38:22-10.200.16.10:40030.service: Deactivated successfully. Jan 15 12:53:58.075768 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 12:53:58.076875 systemd-logind[1693]: Session 15 logged out. Waiting for processes to exit. Jan 15 12:53:58.078176 systemd-logind[1693]: Removed session 15. Jan 15 12:54:00.861251 update_engine[1702]: I20250115 12:54:00.861163 1702 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 12:54:00.861644 update_engine[1702]: I20250115 12:54:00.861466 1702 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 12:54:00.861725 update_engine[1702]: I20250115 12:54:00.861688 1702 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 12:54:00.908508 update_engine[1702]: E20250115 12:54:00.908442 1702 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 12:54:00.908625 update_engine[1702]: I20250115 12:54:00.908532 1702 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 15 12:54:03.154354 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.16.10:40034.service - OpenSSH per-connection server daemon (10.200.16.10:40034). Jan 15 12:54:03.595542 sshd[6256]: Accepted publickey for core from 10.200.16.10 port 40034 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:03.597081 sshd[6256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:03.605413 systemd-logind[1693]: New session 16 of user core. Jan 15 12:54:03.607152 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 15 12:54:03.994911 sshd[6256]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:04.000327 systemd[1]: sshd@13-10.200.20.38:22-10.200.16.10:40034.service: Deactivated successfully. Jan 15 12:54:04.003623 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 12:54:04.006328 systemd-logind[1693]: Session 16 logged out. Waiting for processes to exit. Jan 15 12:54:04.007791 systemd-logind[1693]: Removed session 16. Jan 15 12:54:09.082364 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.16.10:49498.service - OpenSSH per-connection server daemon (10.200.16.10:49498). Jan 15 12:54:09.522419 sshd[6298]: Accepted publickey for core from 10.200.16.10 port 49498 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:09.523850 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:09.528150 systemd-logind[1693]: New session 17 of user core. Jan 15 12:54:09.538209 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 15 12:54:09.916140 sshd[6298]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:09.918752 systemd[1]: sshd@14-10.200.20.38:22-10.200.16.10:49498.service: Deactivated successfully. Jan 15 12:54:09.920651 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 12:54:09.922099 systemd-logind[1693]: Session 17 logged out. 
Waiting for processes to exit. Jan 15 12:54:09.923594 systemd-logind[1693]: Removed session 17. Jan 15 12:54:09.998177 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.16.10:49506.service - OpenSSH per-connection server daemon (10.200.16.10:49506). Jan 15 12:54:10.449166 sshd[6310]: Accepted publickey for core from 10.200.16.10 port 49506 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:10.450637 sshd[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:10.455416 systemd-logind[1693]: New session 18 of user core. Jan 15 12:54:10.465205 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 15 12:54:10.861042 update_engine[1702]: I20250115 12:54:10.860872 1702 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 12:54:10.861344 update_engine[1702]: I20250115 12:54:10.861110 1702 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 12:54:10.861344 update_engine[1702]: I20250115 12:54:10.861311 1702 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 12:54:10.953645 update_engine[1702]: E20250115 12:54:10.953577 1702 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 12:54:10.953780 update_engine[1702]: I20250115 12:54:10.953670 1702 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 15 12:54:10.954560 sshd[6310]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:10.957430 systemd[1]: sshd@15-10.200.20.38:22-10.200.16.10:49506.service: Deactivated successfully. Jan 15 12:54:10.959938 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 12:54:10.961503 systemd-logind[1693]: Session 18 logged out. Waiting for processes to exit. Jan 15 12:54:10.962748 systemd-logind[1693]: Removed session 18. Jan 15 12:54:11.041164 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.16.10:49522.service - OpenSSH per-connection server daemon (10.200.16.10:49522). Jan 15 12:54:11.521535 sshd[6321]: Accepted publickey for core from 10.200.16.10 port 49522 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:11.523051 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:11.528108 systemd-logind[1693]: New session 19 of user core. Jan 15 12:54:11.535158 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 12:54:13.563955 sshd[6321]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:13.566913 systemd-logind[1693]: Session 19 logged out. Waiting for processes to exit. Jan 15 12:54:13.567132 systemd[1]: sshd@16-10.200.20.38:22-10.200.16.10:49522.service: Deactivated successfully. Jan 15 12:54:13.569413 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 12:54:13.573086 systemd-logind[1693]: Removed session 19. Jan 15 12:54:13.645875 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.16.10:49524.service - OpenSSH per-connection server daemon (10.200.16.10:49524). Jan 15 12:54:14.102723 sshd[6346]: Accepted publickey for core from 10.200.16.10 port 49524 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:14.104413 sshd[6346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:14.108414 systemd-logind[1693]: New session 20 of user core. Jan 15 12:54:14.116226 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 15 12:54:14.609322 sshd[6346]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:14.612869 systemd-logind[1693]: Session 20 logged out. Waiting for processes to exit. Jan 15 12:54:14.613555 systemd[1]: sshd@17-10.200.20.38:22-10.200.16.10:49524.service: Deactivated successfully. Jan 15 12:54:14.615858 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 12:54:14.617248 systemd-logind[1693]: Removed session 20. Jan 15 12:54:14.699251 systemd[1]: Started sshd@18-10.200.20.38:22-10.200.16.10:49538.service - OpenSSH per-connection server daemon (10.200.16.10:49538). Jan 15 12:54:15.173864 sshd[6356]: Accepted publickey for core from 10.200.16.10 port 49538 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:15.175469 sshd[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:15.179295 systemd-logind[1693]: New session 21 of user core. Jan 15 12:54:15.187250 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 15 12:54:15.583236 sshd[6356]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:15.587095 systemd[1]: sshd@18-10.200.20.38:22-10.200.16.10:49538.service: Deactivated successfully. Jan 15 12:54:15.588862 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 12:54:15.589962 systemd-logind[1693]: Session 21 logged out. Waiting for processes to exit. Jan 15 12:54:15.591285 systemd-logind[1693]: Removed session 21. Jan 15 12:54:20.676254 systemd[1]: Started sshd@19-10.200.20.38:22-10.200.16.10:58690.service - OpenSSH per-connection server daemon (10.200.16.10:58690). Jan 15 12:54:20.864451 update_engine[1702]: I20250115 12:54:20.864368 1702 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 12:54:20.865275 update_engine[1702]: I20250115 12:54:20.864967 1702 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 12:54:20.865275 update_engine[1702]: I20250115 12:54:20.865234 1702 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 12:54:20.969817 update_engine[1702]: E20250115 12:54:20.969275 1702 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 12:54:20.969817 update_engine[1702]: I20250115 12:54:20.969357 1702 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 15 12:54:20.969817 update_engine[1702]: I20250115 12:54:20.969367 1702 omaha_request_action.cc:617] Omaha request response: Jan 15 12:54:20.969817 update_engine[1702]: E20250115 12:54:20.969457 1702 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 15 12:54:20.969817 update_engine[1702]: I20250115 12:54:20.969475 1702 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 15 12:54:20.969817 update_engine[1702]: I20250115 12:54:20.969480 1702 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 12:54:20.969817 update_engine[1702]: I20250115 12:54:20.969484 1702 update_attempter.cc:306] Processing Done. Jan 15 12:54:20.969817 update_engine[1702]: E20250115 12:54:20.969496 1702 update_attempter.cc:619] Update failed. 
Jan 15 12:54:20.969817 update_engine[1702]: I20250115 12:54:20.969502 1702 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 15 12:54:20.969817 update_engine[1702]: I20250115 12:54:20.969506 1702 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 15 12:54:20.969817 update_engine[1702]: I20250115 12:54:20.969512 1702 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 15 12:54:20.969817 update_engine[1702]: I20250115 12:54:20.969590 1702 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 15 12:54:20.969817 update_engine[1702]: I20250115 12:54:20.969614 1702 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 15 12:54:20.969817 update_engine[1702]: I20250115 12:54:20.969620 1702 omaha_request_action.cc:272] Request: Jan 15 12:54:20.970279 update_engine[1702]: [Omaha error-event request XML not captured in this excerpt] Jan 15 12:54:20.970279 update_engine[1702]: I20250115 12:54:20.969626 1702 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 12:54:20.970279 update_engine[1702]: I20250115 12:54:20.969784 1702 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 12:54:20.970279 update_engine[1702]: I20250115 12:54:20.969981 1702 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 12:54:20.970679 locksmithd[1779]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 15 12:54:20.978639 update_engine[1702]: E20250115 12:54:20.978587 1702 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 12:54:20.978752 update_engine[1702]: I20250115 12:54:20.978667 1702 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 15 12:54:20.978752 update_engine[1702]: I20250115 12:54:20.978677 1702 omaha_request_action.cc:617] Omaha request response: Jan 15 12:54:20.978752 update_engine[1702]: I20250115 12:54:20.978682 1702 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 12:54:20.978752 update_engine[1702]: I20250115 12:54:20.978686 1702 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 12:54:20.978752 update_engine[1702]: I20250115 12:54:20.978691 1702 update_attempter.cc:306] Processing Done. Jan 15 12:54:20.978752 update_engine[1702]: I20250115 12:54:20.978696 1702 update_attempter.cc:310] Error event sent. Jan 15 12:54:20.978752 update_engine[1702]: I20250115 12:54:20.978706 1702 update_check_scheduler.cc:74] Next update check in 43m30s Jan 15 12:54:20.979056 locksmithd[1779]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 15 12:54:21.158269 sshd[6392]: Accepted publickey for core from 10.200.16.10 port 58690 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:21.159665 sshd[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:21.163617 systemd-logind[1693]: New session 22 of user core. Jan 15 12:54:21.170270 systemd[1]: Started session-22.scope - Session 22 of User core.
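The update failure traced above is expected rather than alarming: with automatic updates switched off (on Flatcar this is typically SERVER=disabled in /etc/flatcar/update.conf, and the "Current group set to lts" line earlier comes from GROUP=lts in the same file), the Omaha endpoint is literally the string "disabled", so every transfer dies with curl's "Could not resolve host: disabled". The attempter retries the transfer three times roughly ten seconds apart (the "No HTTP response, retry 1/2/3" lines), converts the final failure to kActionCodeOmahaErrorInHTTPResponse (error code 37), posts an error event, and goes idle until the next check in 43m30s. A compressed Go sketch of that observed control flow follows; it mirrors the log's cadence, not update_engine's real C++ implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// checkForUpdate mimics the cadence in the log: an initial transfer attempt,
// up to three retries about ten seconds apart, then the whole attempt is
// reported as failed.
func checkForUpdate(server string) error {
	const maxRetries = 3
	for attempt := 1; ; attempt++ {
		// With SERVER=disabled the host part is literally "disabled", so
		// this always fails with a resolver error, as in the log. The URL
		// scheme/path here are illustrative only.
		resp, err := http.Get("https://" + server + "/")
		if err == nil {
			resp.Body.Close()
			return nil
		}
		if attempt > maxRetries {
			return fmt.Errorf("omaha request network transfer failed: %w", err)
		}
		fmt.Printf("No HTTP response, retry %d\n", attempt)
		time.Sleep(10 * time.Second)
	}
}

func main() {
	for {
		if err := checkForUpdate("disabled"); err != nil {
			fmt.Println("Update failed:", err) // then an error event is reported
		}
		fmt.Println("Next update check in 43m30s")
		time.Sleep(43*time.Minute + 30*time.Second)
	}
}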
Jan 15 12:54:21.582295 sshd[6392]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:21.585818 systemd[1]: sshd@19-10.200.20.38:22-10.200.16.10:58690.service: Deactivated successfully. Jan 15 12:54:21.588695 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 12:54:21.590219 systemd-logind[1693]: Session 22 logged out. Waiting for processes to exit. Jan 15 12:54:21.591672 systemd-logind[1693]: Removed session 22. Jan 15 12:54:26.669251 systemd[1]: Started sshd@20-10.200.20.38:22-10.200.16.10:58972.service - OpenSSH per-connection server daemon (10.200.16.10:58972). Jan 15 12:54:27.149043 sshd[6431]: Accepted publickey for core from 10.200.16.10 port 58972 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:27.150497 sshd[6431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:27.155251 systemd-logind[1693]: New session 23 of user core. Jan 15 12:54:27.159459 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 15 12:54:27.560371 sshd[6431]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:27.563789 systemd[1]: sshd@20-10.200.20.38:22-10.200.16.10:58972.service: Deactivated successfully. Jan 15 12:54:27.566554 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 12:54:27.567547 systemd-logind[1693]: Session 23 logged out. Waiting for processes to exit. Jan 15 12:54:27.568461 systemd-logind[1693]: Removed session 23. Jan 15 12:54:32.645086 systemd[1]: Started sshd@21-10.200.20.38:22-10.200.16.10:58984.service - OpenSSH per-connection server daemon (10.200.16.10:58984). Jan 15 12:54:33.089917 sshd[6458]: Accepted publickey for core from 10.200.16.10 port 58984 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:33.091468 sshd[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:33.096216 systemd-logind[1693]: New session 24 of user core. Jan 15 12:54:33.099197 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 15 12:54:33.482550 sshd[6458]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:33.485868 systemd[1]: sshd@21-10.200.20.38:22-10.200.16.10:58984.service: Deactivated successfully. Jan 15 12:54:33.489500 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 12:54:33.491412 systemd-logind[1693]: Session 24 logged out. Waiting for processes to exit. Jan 15 12:54:33.492431 systemd-logind[1693]: Removed session 24. Jan 15 12:54:38.577328 systemd[1]: Started sshd@22-10.200.20.38:22-10.200.16.10:44378.service - OpenSSH per-connection server daemon (10.200.16.10:44378). Jan 15 12:54:39.026433 sshd[6471]: Accepted publickey for core from 10.200.16.10 port 44378 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:39.027904 sshd[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:39.031708 systemd-logind[1693]: New session 25 of user core. Jan 15 12:54:39.037186 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 15 12:54:39.415235 sshd[6471]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:39.419193 systemd-logind[1693]: Session 25 logged out. Waiting for processes to exit. Jan 15 12:54:39.419742 systemd[1]: sshd@22-10.200.20.38:22-10.200.16.10:44378.service: Deactivated successfully. Jan 15 12:54:39.422366 systemd[1]: session-25.scope: Deactivated successfully. Jan 15 12:54:39.423759 systemd-logind[1693]: Removed session 25. 
Jan 15 12:54:44.505284 systemd[1]: Started sshd@23-10.200.20.38:22-10.200.16.10:44384.service - OpenSSH per-connection server daemon (10.200.16.10:44384). Jan 15 12:54:44.982421 sshd[6488]: Accepted publickey for core from 10.200.16.10 port 44384 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:44.983934 sshd[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:44.987857 systemd-logind[1693]: New session 26 of user core. Jan 15 12:54:44.994181 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 15 12:54:45.398563 sshd[6488]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:45.402826 systemd[1]: sshd@23-10.200.20.38:22-10.200.16.10:44384.service: Deactivated successfully. Jan 15 12:54:45.404759 systemd[1]: session-26.scope: Deactivated successfully. Jan 15 12:54:45.407659 systemd-logind[1693]: Session 26 logged out. Waiting for processes to exit. Jan 15 12:54:45.408805 systemd-logind[1693]: Removed session 26. Jan 15 12:54:50.486797 systemd[1]: Started sshd@24-10.200.20.38:22-10.200.16.10:54822.service - OpenSSH per-connection server daemon (10.200.16.10:54822). Jan 15 12:54:50.966721 sshd[6519]: Accepted publickey for core from 10.200.16.10 port 54822 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:50.968170 sshd[6519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:50.972789 systemd-logind[1693]: New session 27 of user core. Jan 15 12:54:50.978201 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 15 12:54:51.382293 sshd[6519]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:51.385553 systemd[1]: sshd@24-10.200.20.38:22-10.200.16.10:54822.service: Deactivated successfully. Jan 15 12:54:51.388267 systemd[1]: session-27.scope: Deactivated successfully. Jan 15 12:54:51.389672 systemd-logind[1693]: Session 27 logged out. Waiting for processes to exit. Jan 15 12:54:51.390656 systemd-logind[1693]: Removed session 27.
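From 12:53:37 onward the journal settles into a steady rhythm of short SSH sessions from 10.200.16.10: systemd accepts each connection on 10.200.20.38:22 and starts a per-connection sshd@N-<local>:22-<remote>:<port>.service instance, pam_unix opens a session for user core, systemd-logind registers it as session-N.scope, and the whole stack is torn down moments later. A toy Go accept loop showing the same one-worker-per-connection shape; it is purely illustrative, since sshd and systemd obviously do far more.

package main

import (
	"fmt"
	"net"
)

func main() {
	// systemd listens on :22 and spawns one sshd@... unit per accepted
	// connection; here each connection just gets its own goroutine.
	ln, err := net.Listen("tcp", ":2222")
	if err != nil {
		panic(err)
	}
	session := 10 // session numbering in this log happens to start at 10
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(n int, c net.Conn) {
			defer c.Close()
			fmt.Printf("New session %d of user core (%s)\n", n, c.RemoteAddr())
			// ... authenticate, run the session, wait for it to end ...
			fmt.Printf("Session %d logged out. Waiting for processes to exit.\n", n)
		}(session, conn)
		session++
	}
}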